Q_Id int64 337 49.3M | CreationDate stringlengths 23 23 | Users Score int64 -42 1.15k | Other int64 0 1 | Python Basics and Environment int64 0 1 | System Administration and DevOps int64 0 1 | Tags stringlengths 6 105 | A_Id int64 518 72.5M | AnswerCount int64 1 64 | is_accepted bool 2 classes | Web Development int64 0 1 | GUI and Desktop Applications int64 0 1 | Answer stringlengths 6 11.6k | Available Count int64 1 31 | Q_Score int64 0 6.79k | Data Science and Machine Learning int64 0 1 | Question stringlengths 15 29k | Title stringlengths 11 150 | Score float64 -1 1.2 | Database and SQL int64 0 1 | Networking and APIs int64 0 1 | ViewCount int64 8 6.81M |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
25,626,359 | 2014-09-02T14:53:00.000 | 0 | 0 | 1 | 0 | python,eclipse,debugging,pydev | 25,628,873 | 1 | false | 0 | 0 | I am using the latest pydev and I find the interactive console is still interactive :-) Note that no encouraging prompt is present at the console (e.g. no ">") but if you type one of the variables you see in the variables window you will get a value.. can manipulate etc.
My terminology might be a bit loose. If you mean by interactive console the full ">" console then it is tricky to get that to work during debugging. There is a pydev variable you can set to link it to the debug session but I find it a hassle still.. you have to explicitly switch to such a console.. give a command.. it then throws you back to the normal debug console (which is the one I was referencing as still sensitive to typing variable names etc).. Perhaps I am doing something wrong though for it to be so awkward. I posted on this a few weeks back but there was no reply. I too would like to do debugging in the full console with no hassle. In particular I would like to be able to use its command history to more efficiently manipulate things.
But regardless you can still debug and look at variables just not with the full feature console easily.
Also be aware there seems to be a bug lately (last few releases) where the variables view stays blank. I find that if I close it and reopen it then the variables appear.
Good luck | 1 | 0 | 0 | My issue with the interactive console is twofold:
When I set a breakpoint in my python code, the execution pauses as expected at the breakpoint and displays all my variables in the "Variables" view. However, the interactive console is not very interactive anymore. I would like to be able to play around with the variables when execution is still paused at the breakpoint.
Ideally I would like to have this same behaviour if I'm not debugging but just working in the interactive console. Is there a way to couple the interactive console to the "Variables" view of the "Debug" perspective. When I open an interactive console now the variables view remains empty.
I am running a fresh install of Eclipse Juno (4.4.0) with PyDev (3.7.0). | PyDev interactive console integration with Variables view (Debug perspective) | 0 | 0 | 0 | 754 |
25,627,100 | 2014-09-02T15:30:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,ubuntu-12.04 | 25,691,635 | 1 | true | 0 | 0 | Okay, the problem was in gcc version. During building and creating wheel of package pip uses system gcc (which version is 4.7.2). I'm using python from virtualenv, which was built with gcc 4.4.3. So version of libstdc++ library is different in IPython and one that pip used.
As always there are two solutions (or even more): pass the LD_PRELOAD environment variable with the correct libstdc++ before entering IPython, or use the same version of gcc when creating the wheel and building the virtualenv. I preferred the latter.
Thank you all. | 1 | 0 | 1 | I'm trying to use matplotlib on Ubuntu 12.04. So I built a wheel with pip:
python .local/bin/pip wheel --wheel-dir=wheel/ --build=build/ matplotlib
Then successfully installed it:
python .local/bin/pip install --user --no-index --find-links=wheel/ --build=build/ matplotlib
But when I try to import it in IPython, an ImportError occurs:
In [1]: import matplotlib
In [2]: matplotlib.get_backend()
Out[2]: u'agg'
In [3]: import matplotlib.pyplot
ImportError
Traceback (most recent call last)
/place/home/yefremat/<ipython-input> in <module>()
----> 1 import matplotlib.pyplot
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/pyplot.py in <module>()
32 from matplotlib import docstring
33 from matplotlib.backend_bases import FigureCanvasBase
---> 34 from matplotlib.figure import Figure, figaspect
35 from matplotlib.gridspec import GridSpec
36 from matplotlib.image import imread as _imread
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/figure.py in <module>()
38 import matplotlib.colorbar as cbar
39
---> 40 from matplotlib.axes import Axes, SubplotBase, subplot_class_factory
41 from matplotlib.blocking_input import BlockingMouseInput, BlockingKeyMouseInput
42 from matplotlib.legend import Legend
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/axes/__init__.py in <module>()
2 unicode_literals)
3
----> 4 from ._subplots import *
5 from ._axes import *
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/axes/_subplots.py in <module>()
8 from matplotlib import docstring
9 import matplotlib.artist as martist
---> 10 from matplotlib.axes._axes import Axes
11
12 import warnings
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/axes/_axes.py in <module>()
36 import matplotlib.ticker as mticker
37 import matplotlib.transforms as mtransforms
---> 38 import matplotlib.tri as mtri
39 import matplotlib.transforms as mtrans
40 from matplotlib.container import BarContainer, ErrorbarContainer, StemContainer
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/tri/__init__.py in <module>()
7 import six
8
----> 9 from .triangulation import *
10 from .tricontour import *
11 from .tritools import *
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/tri/triangulation.py in <module>()
4 import six
5
----> 6 import matplotlib._tri as _tri
7 import matplotlib._qhull as _qhull
8 import numpy as np
ImportError:
/home/yefremat/.local/lib/python2.7/site-packages/matplotlib/_tri.so:
undefined symbol: _ZNSt8__detail15_List_node_base9_M_unhookEv
Maybe I'm doing something wrong? Or maybe there is a way to turn off GUI support in matplotlib?
Thanks in advance. | Installing matplotlib via pip on Ubuntu 12.04 | 1.2 | 0 | 0 | 520 |
25,629,092 | 2014-09-02T17:31:00.000 | -1 | 0 | 0 | 0 | python,mysql,django,sqlite,mysql-python | 25,630,191 | 2 | false | 1 | 0 | Try the followings steps:
1. Change DATABASES in settings.py to the MySQL engine
2. Run $ ./manage.py syncdb | 1 | 0 | 0 | I need help switching my database engine from sqlite to mysql. manage.py datadump is returning the same error that pops up when I try to do anything else with manage.py : ImproperlyConfigured: Error loading MySQL module, No module named MySQLdb.
This django project is a team project. I pulled new changes from bitbucket and our backend has a new configuration. This new configuration needs mysql (and not sqlite) to work. Our lead dev is sleeping right now. I need help so I can get started working again.
Edit: How will I get the data in the sqlite database file into the new MySQL Database? | How can I switch my Django project's database engine from Sqlite to MySQL? | -0.099668 | 1 | 0 | 4,742 |
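The DATABASES change from step 1 of the answer above is a settings.py fragment along these lines (the database name, user, password and host below are placeholders, not values from the question):

```python
# settings.py -- switch the default database from SQLite to MySQL.
# NAME, USER, PASSWORD and HOST are placeholders for your own values.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # was 'django.db.backends.sqlite3'
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```

For the edit about moving the data: running manage.py dumpdata against the old SQLite settings and then manage.py loaddata after pointing DATABASES at MySQL is one common route.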
25,629,462 | 2014-09-02T17:56:00.000 | 2 | 0 | 0 | 1 | python,linux,excel,vagrant,pywin32 | 25,629,595 | 1 | true | 0 | 0 | The short answer is, you can't. WINE does not expose a bottled Windows environment's COM registry out to linux—and, even if it did, pywin32 doesn't build on anything but Windows.
So, here are some options, roughly ordered from the least amount of change to your code and setup to the most:
Run both your Python script and Excel under real Windows, inside a real emulator.
Run both your Python script and Excel under WINE.
Write or find a library that does expose a bottled Windows environment's COM registry out to Linux.
Write or find a cross-platform DCOM library that presents a win32com-like API, then change your code to use that to connect to the bottled Excel remotely.
Rewrite your code to script Excel indirectly by, e.g., sshing into a Windows box and running minimal WSH scripts.
Rewrite your code to script LibreOffice or whatever you prefer instead of Excel.
Rewrite your code to process Excel files (or CSV or some other interchange format) directly instead of scripting Excel. | 1 | 0 | 0 | I have written an extensive python package that utilizes excel and pywin32.
I am now in the process of moving this package to a Linux environment on a Vagrant machine.
I know there are "emulator-esque" software packages (e.g. WINE) that can run Windows applications and look-a-likes for some Windows applications (e.g. Excel to OpenOffice).
However, I am not seeing the right path to take in order to get my pywin32/Excel dependent code written for Windows running in a Linux environment on a Vagrant machine. Ideally, I would not have to alter my code at all and just do the appropriate installs on my Vagrant machine.
Thanks | Porting Python on Windows using pywin32/excel to Linux on Vagrant Machine | 1.2 | 1 | 0 | 850 |
25,630,381 | 2014-09-02T18:55:00.000 | 0 | 0 | 1 | 0 | python-2.7,gpib | 25,654,559 | 1 | false | 0 | 0 | Yeah! I did it!
I am using a Keithley KUSB-488A GPIB-to-USB cable to connect to my instrument.
As soon as I updated the GPIB/KUSB-488A drivers from version 9.1 to version 9.2, my computer stopped mysteriously crashing when I close my python interpreter after communicating with the instrument.
Moral of the story:
Whenever controlling Keithley or Agilent hardware with Python, make sure you have the latest drivers for all hardware and connectors, or you will be in for a world of hurt. :) | 1 | 0 | 0 | If I connect to any one of my GPIB instruments using pyvisa and then I try to close
my python interpreter, my computer crashes (then automatically restarts). I only have this issue after connecting to a GPIB instrument with pyvisa; otherwise I can control my GPIB instruments and use my interpreter without any issues.
Here are the python tools I'm using:
python 2.7
python xy 2.7.5.1
spyder 2.2.4
Has anyone else encountered this problem? And what do you think I can do to fix it?
Thanks for your help. | computer crashes after I close my python interpreter after connecting to GPIB instrument | 0 | 0 | 0 | 152 |
25,632,301 | 2014-09-02T21:01:00.000 | 0 | 0 | 0 | 0 | python,google-app-engine,data-structures,google-cloud-datastore,app-engine-ndb | 25,632,434 | 2 | false | 1 | 0 | Why not just put a boolean in your "BlogPost" Entity, 0 if it's past, 1 if it's future? will let you query them separately easily. | 1 | 0 | 0 | Let's take an example on which I run a blog that automatically updates its posts.
I would like to keep an entity of class(=model) BlogPost in two different "groups", one called "FutureBlogPosts" and one called "PastBlogPosts".
This is a reasonable division that will allow me to work with my blog posts efficiently (query them separately etc.).
Basically the problem is the "kind" of my model will always be "BlogPost". So how can I separate it into two different groups?
Here are the options I found so far:
Duplicating the same model class code twice (once FutureBlogPost class and once PastBlogPost class (so their kinds will be different)) -- seems quite ridiculous.
Putting them under different ancestors (FutureBlogPost, "SomeConstantValue", BlogPost, #id) -- but this method also has its implications (1 write per second?) and the whole ancestor-child relationship doesn't seem to fit here. (And why do I have to use "SomeConstantValue" if I choose that option?)
Using different namespaces -- seems too radical for such a simple separation
What is the right way to do it? | Same Kind Entities Groups | 0 | 0 | 0 | 52 |
25,635,116 | 2014-09-03T02:05:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,if-statement,boolean | 25,635,143 | 2 | false | 0 | 0 | foo == (8 or 9) is not the same as foo == 8 or foo == 9and the latter is the correct form.
(8 or 9) evaluates to 8 since in Python or evaluates to the first operand (left to right) that is 'truthy', or False if neither is, so the check becomes a plain foo == 8. | 1 | 3 | 0 | When programming in python, when you are checking if a statement is true, would it be more correct to use foo == (8 or 9) or foo == 8 or foo == 9? Is it just a matter of what the programmer chooses to do? I am wondering about python 2.7, in case it is different in python 3. | In python, is 'foo == (8 or 9)' or 'foo == 8 or foo == 9' more correct? | 0.197375 | 0 | 0 | 106 |
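The evaluation order described in the answer above is easy to check directly:

```python
# 'or' returns its first truthy operand, so (8 or 9) evaluates to just 8.
foo = 9
print(8 or 9)                # 8
print(foo == (8 or 9))       # False -- this only compares foo against 8
print(foo == 8 or foo == 9)  # True  -- the correct form
print(foo in (8, 9))         # True  -- a more compact spelling of the same check
```

The `foo in (8, 9)` membership test behaves the same in Python 2.7 and Python 3.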
25,636,670 | 2014-09-03T05:11:00.000 | 5 | 0 | 1 | 0 | python,nlp,artificial-intelligence,classification,nltk | 25,636,786 | 2 | true | 0 | 0 | Not without a corpus, no.
Look at it this way: can you, an intelligent being, tell whether 光 is related to 部屋に入った時電気をつけました without asking someone or something that actually knows Japanese (assuming you don't know Japanese; if you do, try with "svjetlo" and "Kad je ušao u sobu, upalio je lampu"). If you can't, how do you expect a computer to do it?
And another experiment - can you, an intelligent being, give me the algorithm by which you can teach a non-english-speaking person that "light" is related to "When he entered the room, he turned on the lamp"? Again, no.
tl;dr: You need training data, unless you significantly restrict the meaning of "related" (to "contains", for example). | 1 | 5 | 0 | I have a word, and I want to find out whether a given text is related to that word using Python and NLTK. Is it possible?
For example, I have a word called "phosphorous". I would like to find out whether a particular text file is related to this word or not.
I can't use a bag of words in NLTK as I have only one word and no training data.
Any Suggestions?
Thanks in Advance. | Word and Text relation using python and NLP | 1.2 | 0 | 0 | 988 |
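If you do restrict "related" to something as crude as "the text contains the word", as the answer above suggests, then no corpus is needed. A minimal sketch in plain Python (the 6-character prefix match is a naive stand-in for stemming, an assumption of this sketch rather than an NLTK feature):

```python
# Crude "relatedness" check: does the text mention the word (case-insensitively)?
# This implements the "contains" restriction, not real semantic relatedness.
def mentions(word, text):
    stem = word.lower()[:6]  # naive stemming: compare only the first 6 letters
    tokens = (tok.strip('.,;:!?') for tok in text.lower().split())
    return any(tok.startswith(stem) for tok in tokens)

print(mentions("phosphorous", "Phosphorus compounds glow in the dark."))  # True
print(mentions("phosphorous", "This text is about bananas."))             # False
```

Anything beyond literal mentions (synonyms, topical relatedness) brings you back to needing training data, exactly as the answer says.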
25,638,581 | 2014-09-03T07:25:00.000 | 0 | 0 | 0 | 0 | python,email,openerp,openerp-7 | 25,640,818 | 3 | false | 1 | 0 | You can check the automated actions from OpenERP in the Settings/Technical/Scheduler/Scheduled Actions menu. Look for the actions that read incoming e-mails and de-activate it. | 2 | 0 | 0 | We are using OpenERP 7 to manage leads within our organisation.
Leads are created by incoming emails. When assigning to a different sales person, the sales person gets an email with the original email and the from address is the original person that emailed it.
This is a problem because it looks like the customer emailed them directly and encourages the sales person to manage the lead from their email, rather than sending responses from the OpenERP system. How can I stop this email from being sent? I want to make my own template and use an automatic action to send a notification.
There is no automatic action sending this email. I believe it is somewhere in the python code. | Stop Automatic Lead Email | 0 | 0 | 0 | 278 |
25,638,581 | 2014-09-03T07:25:00.000 | 0 | 0 | 0 | 0 | python,email,openerp,openerp-7 | 25,680,204 | 3 | false | 1 | 0 | I found a hack solution which I hope someone can improve.
Basically the email comes in and adds an entry to the table mail_message. The type is set as "email" and this seems to be the issue. If I change it to "notification", the original email does not get sent to the newly assigned salesperson which is the behaviour that I want.
Create a server action on the incoming email server that executes the following python code:
cr.execute("UPDATE mail_message SET type = 'notification' WHERE model = 'crm.lead' AND res_id = %s AND type = 'email' ", (object.id, )) | 2 | 0 | 0 | We are using OpenERP 7 to manage leads within our organisation.
Leads are created by incoming emails. When assigning to a different sales person, the sales person gets an email with the original email and the from address is the original person that emailed it.
This is a problem because it looks like the customer emailed them directly and encourages the sales person to manage the lead from their email, rather than sending responses from the OpenERP system. How can I stop this email from being sent? I want to make my own template and use an automatic action to send a notification.
There is no automatic action sending this email. I believe it is somewhere in the python code. | Stop Automatic Lead Email | 0 | 0 | 0 | 278 |
25,639,073 | 2014-09-03T07:53:00.000 | 0 | 0 | 0 | 0 | python-2.7,user-interface,kivy | 25,639,632 | 2 | false | 0 | 1 | I'm not aware of a simple way to do this - it's not exposed in the kivy window api, and may not even be exposed in pygame (which is likely the backend you're using on desktop).
Maybe you can look up the right way to do it on each system you target, e.g. I think you can hint it to X11, but I don't know if this is really plausible or nice.
We're developing a new sdl2 backend, which is more flexible about some of these kinds of things, but I don't know if it would make this possible. | 1 | 5 | 0 | How can I remove or hide the default minimize/maximize buttons of a window created with Kivy? | How to hide/remove the default minimize/maximize buttons on window developed with Kivy? | 0 | 0 | 0 | 1,674
25,641,782 | 2014-09-03T10:16:00.000 | 0 | 1 | 0 | 0 | python | 25,641,891 | 1 | true | 0 | 0 | Use __file__.
You can use os.path.basename(__file__) | 1 | 1 | 0 | I would like to get the name of the "file.py" that I am executing with IronPython.
I have to read and save data to other files whose names start the same way.
Thank you very much!
Humano | How to know the filename of script.py while executing it with ironpython | 1.2 | 0 | 0 | 189 |
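Both suggestions from the answer above can be exercised by writing a tiny script to disk and running it as a real file (shown here with CPython; IronPython exposes `__file__` the same way for scripts run from disk):

```python
# Demonstrate __file__ and os.path.basename(__file__) by writing a tiny
# script to a temporary directory and running it as a real file.
import os
import subprocess
import sys
import tempfile

script = "import os\nprint(os.path.basename(__file__))\n"
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "whoami.py")
    with open(path, "w") as f:
        f.write(script)
    name = subprocess.check_output([sys.executable, path]).decode().strip()

print(name)  # whoami.py
```

From there, os.path.splitext(name) splits off the ".py" extension, which is handy for building the related file names the question asks about.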
25,643,996 | 2014-09-03T12:12:00.000 | 0 | 0 | 0 | 0 | python,selenium,screenshot,python-imaging-library | 25,644,516 | 1 | false | 1 | 0 | You can scrool with driver.execute_script method and then take a screenshot.
I scroll some modal windows this way with jQuery:
driver.execute_script("$('.ui-window-wrap:visible').scrollTop(document.body.scrollHeight);") | 1 | 0 | 0 | I need to take a screenshot of a particular given dom element including the area inside scroll region.
I tried to take a screenshot of the entire web page using selenium and crop the image using the Python Imaging Library with the dimensions given by selenium. But I couldn't figure out a way to capture the area under the scroll region.
For example, I have an element with class container in my page and its height is dynamic based on the content. I need to take a screenshot of it entirely, but the resulting image skips the region inside the scrollbar, and the cropped image ends up with just the scroll bar in it.
Is there any way to do this? A solution using selenium is preferable; if it cannot be done with selenium, an alternate solution will also do. | Taking screenshot of particular div element including the area inside scroll region using selenium and python | 0 | 0 | 1 | 1,055
25,644,432 | 2014-09-03T12:32:00.000 | 0 | 0 | 0 | 0 | python,tags,extract | 25,644,601 | 3 | false | 1 | 0 | Visit the site to find the URL that shows the information you want, then look at the page source to see how it has been formatted. | 1 | 0 | 0 | My problem is that I want to create a data base of all of the questions, answers, and most importantly, the tags, from a certain (somewhat small) Stack Exchange. The relationships among tags (e.g. tags more often used together have a strong relation) could reveal a lot about the structure of the community and popularity or interest in certain sub fields.
So, what is the easiest way to go through a list of questions (that are positively ranked) and extract the tag information using Python? | How to scrape tag information from questions on Stack Exchange | 0 | 0 | 1 | 191 |
25,645,275 | 2014-09-03T13:11:00.000 | 3 | 0 | 0 | 0 | python,django,django-south | 25,648,138 | 1 | true | 1 | 0 | From a blog post I can't find anymore, the best way is to create two distinct directories:
one called new_migrations, which will hold the migration files (Django 1.7), and another called old_migrations, which will handle the downgrade part (if you need it).
In order to do it, move your migrations folder to old_migrations, then recreate all your schema migrations with the built-in framework :)
In case of a downgrade, just move your old directory back and use South as before. | 1 | 0 | 0 | I have a project based on Django 1.6 with South. I wonder whether it is possible to upgrade my project to Django 1.7 with the new built-in database migration system and keep the possibility of downgrading the database to its previous state? | Django 1.7 - migrations from South | 1.2 | 0 | 0 | 192
25,646,200 | 2014-09-03T13:53:00.000 | 35 | 0 | 1 | 0 | python,pandas,timedelta | 48,203,281 | 6 | false | 0 | 0 | Timedelta objects have read-only instance attributes .days, .seconds, and .microseconds. | 1 | 164 | 1 | I would like to create a column in a pandas data frame that is an integer representation of the number of days in a timedelta column. Is it possible to use 'datetime.days' or do I need to do something more manual?
timedelta column
7 days, 23:29:00
day integer column
7 | Python: Convert timedelta to int in a dataframe | 1 | 0 | 0 | 343,961 |
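The read-only attributes mentioned in the answer above also exist on the standard library's datetime.timedelta, and .days truncates rather than rounds, which is exactly the "day integer column" behaviour shown in the question (in pandas itself, the equivalent for a whole column is the .dt.days accessor; the sketch below sticks to the standard library):

```python
from datetime import timedelta

td = timedelta(days=7, hours=23, minutes=29)
print(td.days)     # 7 -- truncated toward zero, not rounded
print(td.seconds)  # 84540 (the 23h 29m part expressed in seconds)
print(td / timedelta(days=1))  # fractional days, if truncation is not wanted
```

So `df['day_int'] = df['timedelta_col'].dt.days` would give the integer column asked for, with the same truncating behaviour.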
25,648,393 | 2014-09-03T15:36:00.000 | 0 | 0 | 0 | 0 | python,mysql,database,django,schema-migration | 43,198,881 | 11 | false | 1 | 0 | change the names of old models to ‘model_name_old’
makemigrations
make new models named ‘model_name_new’ with identical relationships on the related models
(e.g. the user model now has user.blog_old and user.blog_new)
makemigrations
write a custom migration that migrates all the data to the new model tables
test the hell out of these migrations by comparing backups with new db copies before and after running the migrations
when all is satisfactory, delete the old models
makemigrations
change the new models to the correct name ‘model_name_new’ -> ‘model_name’
test the whole slew of migrations on a staging server
take your production site down for a few minutes in order to run all migrations without users interfering
Do this individually for each model that needs to be moved.
I wouldn't suggest doing what the other answer says, i.e. changing to integers and back to foreign keys.
There is a chance that new foreign keys will be different and rows may have different IDs after the migrations and I didn’t want to run any risk of mismatching ids when switching back to foreign keys. | 2 | 147 | 0 | So about a year ago I started a project and like all new developers I didn't really focus too much on the structure, however now I am further along with Django it has started to appear that my project layout mainly my models are horrible in structure.
I have models mainly held in a single app, and really most of these models should be in their own individual apps. I did try to resolve this and move them with South; however, I found it tricky and really difficult due to foreign keys etc.
However, given Django 1.7's built-in support for migrations, is there a better way to do this now? | How to move a model between two Django apps (Django 1.7) | 0 | 0 | 0 | 38,041
25,648,393 | 2014-09-03T15:36:00.000 | 0 | 0 | 0 | 0 | python,mysql,database,django,schema-migration | 33,096,296 | 11 | false | 1 | 0 | Lets say you are moving model TheModel from app_a to app_b.
An alternate solution is to alter the existing migrations by hand. The idea is that each time you see an operation altering TheModel in app_a's migrations, you copy that operation to the end of app_b's initial migration. And each time you see a reference 'app_a.TheModel' in app_a's migrations, you change it to 'app_b.TheModel'.
I just did this for an existing project, where I wanted to extract a certain model to a reusable app. The procedure went smoothly. I guess things would be much harder if there were references from app_b to app_a. Also, I had a manually defined Meta.db_table for my model, which might have helped.
Notably you will end up with altered migration history. This doesn't matter, even if you have a database with the original migrations applied. If both the original and the rewritten migrations end up with the same database schema, then such rewrite should be OK. | 2 | 147 | 0 | So about a year ago I started a project and like all new developers I didn't really focus too much on the structure, however now I am further along with Django it has started to appear that my project layout mainly my models are horrible in structure.
I have models mainly held in a single app, and really most of these models should be in their own individual apps. I did try to resolve this and move them with South; however, I found it tricky and really difficult due to foreign keys etc.
However, given Django 1.7's built-in support for migrations, is there a better way to do this now? | How to move a model between two Django apps (Django 1.7) | 0 | 0 | 0 | 38,041
25,648,888 | 2014-09-03T16:03:00.000 | 0 | 0 | 0 | 1 | python,hadoop,port,conflict | 25,667,671 | 2 | false | 0 | 0 | It appears Cloudera installed python 2.7. This was removed / replace with python 3.2.
The $jps command on Hadoop now returns the expected results including NameNode. | 1 | 0 | 0 | I am installing Hadoop 2.5.0 on a Ubuntu 12.04 cluster, 64-bit. At the end of the instructions I type $ jps on the master node and do not get a NameNode. I checked the Hadoop logs and found:
BindException error stating :9000 is already in use.
$ netstat -a -t --numeric-ports -p | grep :9000 returns that python is listening on this port. It appears I need to move python 2.7 to another port. How do I move python?
Followed the command below, the pid=2346.
$ ps -p 2346
PID TTY TIME CMD
2346 ? 01:28:13 python
Tried second command:
$ ps -lp 2346
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 2346 1 0 80 0 - 332027 poll_s ? 01:28:30 python
more detail:
$ ps -Cp 2346
PID TTY STAT TIME COMMAND
2346 ? Ssl 88:34 /usr/lib/cmf/agent/build/env/bin/python /usr/lib/cmf/agent/src/cmf/agent.py --package_dir /usr/lib/cmf
It appears a failed Cloudera Hadoop distribution installation has not been removed. It installed python 2.7 automatically. Not sure what else is automatically running. Will attempt to uninstall python 2.7. | port conflict between Hadoop and python | 0 | 0 | 0 | 430 |
25,648,912 | 2014-09-03T16:04:00.000 | 0 | 0 | 0 | 1 | python-2.7,tkinter | 25,689,654 | 1 | true | 0 | 1 | I am using Linux Mint. In order to make a program not show up in the foreground (i.e. be hidden behind all of the other windows), one should use root.lower() as aforementioned in the comments. However, please note (and this seems to happen on multiple platforms) that root.lower() will not change the focus of the window. Therefore, even if you use .lower() and run the script, and if you press [alt] + [F4], for example, the Tkinter window that was just opened (even though you cannot see it) will be closed.
I noticed, however, that it is prudent to place the root.lower() after attributes for the Tkinter root. For example, if you use root.attributes("-zoomed", True) to expand the window, be sure to place root.lower() after the root.attributes(..). Moreover, it did not work for me when I put root.lower() before root.attributes(..). | 1 | 0 | 0 | Comically enough, I was really annoyed when tkinter windows opened in the background on Mac. However, now I am on Linux, and I want tkinter to open in background.
I don't know how to do this, and when I google how to do it, all I can find are a lot of angry Mac users who can't get tkinter to open in the foreground.
I should note that I am using python2.7 and thus Tkinter not tkinter (very confusing). | Start Tkinter in background | 1.2 | 0 | 0 | 184 |
25,653,118 | 2014-09-03T20:31:00.000 | 1 | 0 | 1 | 0 | python | 25,684,698 | 2 | false | 0 | 0 | As far as I remember, there is not an uninstaller, as it only copies files to a destination directory. However, you can check if it does exist at the folder (it would say sth as WinPython...uninstaller.exe. If it does not, just take a look at the Windows control panel (there won't be anything probably, but let's try), or directly uninstall individual packages from Winpython and then delete all. | 1 | 11 | 0 | I have WinPython-64bit-2.7.6.4
I'd like to uninstall all of it but I'm not sure how.
Is there an uninstaller I can run? Or do I have to go into the WinPython control panel, uninstall individual packages, and then delete everything? | How can I uninstall WinPython? | 0.099668 | 0 | 0 | 13,563
25,653,881 | 2014-09-03T21:22:00.000 | 4 | 0 | 0 | 1 | python,linux,proc,tmpfs | 25,654,779 | 3 | false | 0 | 0 | Pass /proc/self/fd/1 as the filename to the child program. All of the writes to /proc/self/fd/1 will actually go to the child program's stdout. Use subprocess.Popen(), et al, to capture the child's stdout. | 1 | 2 | 0 | A program that I can not modify writes it's output to a file provided as an argument. I want to have the output to go to RAM so I don't have to do unnecessary disk IO.
I thought I could use tmpfs and "trick" the program into writing to that; however, not all Linux distros use tmpfs for /tmp: some mount tmpfs under /run (Ubuntu), others under /dev/shm (RedHat).
I want my program to be as portable as possible and I don't want to create tmpfs file systems on the user's system if I can avoid it.
Obviously I can do df | grep tmpfs and use whatever mount that returns, but I was hoping for something a bit more elegant.
Is it possible to write to a pseudo terminal or maybe to /proc somewhere? | How to write file to RAM on Linux | 0.26052 | 0 | 0 | 5,650 |
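A sketch of the accepted answer's trick: the child program is given /proc/self/fd/1 as its "output file", which is really its own stdout, and the parent captures that with subprocess. This is Linux-only, since that path does not exist elsewhere; the child program here is a stand-in for the unmodifiable program from the question.

```python
# Linux-only sketch: the child is told to write its output to the "file"
# /proc/self/fd/1, so its file writes really go to its own stdout,
# which the parent process then captures in memory -- no disk IO.
import subprocess
import sys

child = "open('/proc/self/fd/1', 'w').write('captured without disk IO')"
result = subprocess.run(
    [sys.executable, "-c", child],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # captured without disk IO
```

For the real program, the filename argument would simply be the literal string /proc/self/fd/1 on its command line.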
25,654,076 | 2014-09-03T21:38:00.000 | 0 | 0 | 1 | 0 | python,perl,module,packages | 25,654,430 | 1 | true | 0 | 0 | False dilemma. You can do both at once.
You can make a module that you will use in your scripts, and then when it comes time to deploy the scripts, include it with them. Either include it as a local module file, or actually roll up the module and the script into a single file.
That way, your stuff does not require any modules to be installed separately. | 1 | 3 | 0 | I always face a dilemma over whether to make a module/package or keep a script standalone.
I often write small scripts/programs in Perl or Python that do some work. Sometimes I use the same subroutine in several programs, but they are only small subroutines, and here comes my dilemma.
If I keep my script standalone, anybody can run it without installing any packages. If it is a single file it can be used from anywhere. However, if I make a module, my users will need to install it before using my script. I will also need to handle the case where the dependent modules are not available, and the users could encounter issues during the installation of the required modules.
So, in order to avoid dependency issues, I prefer having a bit of redundancy and not using any additional packages (only when I can; obviously I won't reimplement XML::Twig in all my scripts, but I could do it for an INI parser or a Perl-to-JSON converter).
Also, I usually put all my scripts in the same directory like /usr/local/mycompany/bin
What would be the best strategy to adopt for scripts/programs that do not exceed 200 lines and that are used by fewer than 20 people?
EDIT:
I am not asking for personal opinion. I am only looking for a very pragmatic and rational answer from those who have a good programming experience.
To give you a more concrete example: I have a script that parses a configuration file in a proprietary format. This format is used by many people in my company; however, only my scripts use my parser. I think I have only three choices:
Placing the parser (about 50 lines) in each of my 5 scripts.
Making a nice module that needs to be installed in the Perl/Python distribution.
Using a standalone library located in the same directory of the other scripts. | standalone script or as module? | 1.2 | 0 | 0 | 1,034 |
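The accepted answer's "include it with them" approach and the questioner's option 3 both boil down to a shared module sitting next to the scripts that use it. A self-contained sketch that builds that layout in a temporary directory and runs it (confparser here is a hypothetical stand-in for the ~50-line proprietary parser):

```python
# Sketch of option 3: a shared helper module living in the same directory
# as the scripts that use it.  We build the layout in a temp dir and run it.
import os
import subprocess
import sys
import tempfile

module_src = (
    "def parse(text):\n"
    "    return dict(line.split('=') for line in text.splitlines())\n"
)
script_src = (
    "import os, sys\n"
    # make the sibling import work no matter what the caller's cwd is
    "sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))\n"
    "import confparser\n"
    "print(confparser.parse('a=1\\nb=2'))\n"
)
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "confparser.py"), "w") as f:
        f.write(module_src)
    with open(os.path.join(d, "myscript.py"), "w") as f:
        f.write(script_src)
    out = subprocess.check_output(
        [sys.executable, os.path.join(d, "myscript.py")]
    ).decode().strip()
print(out)  # {'a': '1', 'b': '2'}
```

Deployment then means copying the directory as a unit; nothing has to be installed into the Perl/Python distribution, which sidesteps the dependency worries in the question.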
25,655,455 | 2014-09-04T00:10:00.000 | 0 | 0 | 0 | 0 | python,pypdf | 72,240,023 | 1 | false | 0 | 0 | pypdf has no direct support for it - it's not impossible, but it would require significant effort from your side. As pypdf is no longer maintained, this will not change.
PyPDF2 also does not have this at the moment (May 2022), but I'm open to a PR adding this support. | 1 | 7 | 0 | I am using pyPdf to extract text from a PDF. I would like to be able to know which text is bold in order to identify bold section headers. How can I identify bold text? | Identifying Bold Text in PDF using pyPdf | 0 | 0 | 0 | 695 |
25,655,755 | 2014-09-04T00:50:00.000 | 1 | 0 | 0 | 0 | python,flask,sqlalchemy,flask-sqlalchemy | 25,655,829 | 2 | false | 1 | 0 | If you have two tables with the same columns, your database schema could probably be done better. I think you should really have a table called CarMake, with entries for Toyota, Honda etc, and another table called Car which has a foreign key to CarMake (e.g. via a field called car_make or similar).
That way, you could represent this in Flask with two models - one for Car and one for CarMake. | 1 | 0 | 0 | If I have two tables with same columns, ex. a table called Toyota and a table called Honda,
how can I map these two tables with one model (maybe called Car) in flask? | How can you map multiple tables to one model? | 0.099668 | 0 | 0 | 703 |
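The normalized design the answer describes, one CarMake row per manufacturer and a single Car table pointing at it instead of one table per make, can be sketched in plain SQL with the stdlib sqlite3 module. The table and column names here are illustrative assumptions; with Flask-SQLAlchemy you would mirror the same two tables as a CarMake model and a Car model carrying a foreign key to it:

```python
import sqlite3

# One table per *entity*, not per manufacturer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE car_make (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE car (
        id INTEGER PRIMARY KEY,
        model TEXT,
        car_make_id INTEGER REFERENCES car_make(id)
    );
""")
conn.executemany("INSERT INTO car_make (name) VALUES (?)",
                 [("Toyota",), ("Honda",)])
conn.executemany("INSERT INTO car (model, car_make_id) VALUES (?, ?)",
                 [("Corolla", 1), ("Civic", 2)])

# A join replaces the need for per-make tables entirely.
rows = conn.execute("""
    SELECT car_make.name, car.model
    FROM car JOIN car_make ON car.car_make_id = car_make.id
    ORDER BY car_make.name
""").fetchall()
```

Adding a new manufacturer is then just an INSERT into car_make, with no schema change and no new model class.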
25,660,064 | 2014-09-04T07:53:00.000 | 10 | 1 | 0 | 0 | python,fixtures,pytest | 28,593,102 | 5 | false | 0 | 0 | I was just having this problem with two function-scoped autouse fixtures. I wanted fixture b to run before fixture a, but every time, a ran first. I figured maybe it was alphabetical order, so I renamed a to c, and now b runs first. Pytest doesn't seem to have this documented. It was just a lucky guess. :-)
That's for autouse fixtures. Considering broader scopes (eg. module, session), a fixture is executed when pytest encounters a test that needs it. So if there are two tests, and the first test uses a session-scoped fixture named sb and not the one named sa, then sb will get executed first. When the next test runs, it will kick off sa, assuming it requires sa. | 2 | 49 | 0 | For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture logging those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture)
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order? | In which order are pytest fixtures executed? | 1 | 0 | 0 | 26,370 |
25,660,064 | 2014-09-04T07:53:00.000 | 2 | 1 | 0 | 0 | python,fixtures,pytest | 25,709,928 | 5 | false | 0 | 0 | IIRC you can rely on higher scoped fixtures to be executed first. So if you created a session scoped autouse fixture to monkeypatch smtplib.SMTP.connect then you could create a function-scoped fixture which undoes this monkeypatching for one test, restoring it afterwards. I assume the easiest way to do this is create your own smtpserver fixture which depends on both the disallow_smtp fixture as well as the smtpserver fixture from pytest-localserver and then handles all setup and teardown required to make these two work together.
This is vaguely how pytest-django handles it's database access btw, you could try and look at the code there but it is far from a simple example and has many of it's own weird things. | 2 | 49 | 0 | For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture logging those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture)
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order? | In which order are pytest fixtures executed? | 0.07983 | 0 | 0 | 26,370 |
25,668,522 | 2014-09-04T14:48:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,python-2.7 | 33,986,803 | 4 | false | 1 | 0 | I face this issue too. It has to do with the application you are running. If you are sure it runs perfectly fine, then it may be over burdening the server in a way. I strongly recommend logging relevant aspect of your code so it displays any issue in the log console. Hope this helps | 3 | 12 | 0 | I am running development web server in Google App Engine Launcher without any troubles.
But I can't successfully stop it. When I press the Stop button, nothing happens.
Nothing adds in logs after pressing Stop.
And after that I can't close launcher.
The only way to close launcher is Task Manager.
Although when I am using dev_appserver.py myapp via cmd it is successfully stopped by Ctrl+C.
By the way I am under proxy. | Can't stop web server in Google App Engine Launcher | 0 | 0 | 0 | 981 |
25,668,522 | 2014-09-04T14:48:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,python-2.7 | 28,325,931 | 4 | false | 1 | 0 | I think your server has crashed, because maybe you overloaded it or maybe there's an internal error that can be solved by re-installing the web server.
But I can't successfully stop it. When I press the Stop button, nothing happens.
Nothing adds in logs after pressing Stop.
And after that I can't close launcher.
The only way to close launcher is Task Manager.
Although when I am using dev_appserver.py myapp via cmd it is successfully stopped by Ctrl+C.
By the way I am under proxy. | Can't stop web server in Google App Engine Launcher | 0 | 0 | 0 | 981 |
25,668,522 | 2014-09-04T14:48:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,python-2.7 | 27,682,317 | 4 | false | 1 | 0 | This is just a suggestion, but I think if you overloaded the server by repeatedly pinging the IP, you could crash the webserver. | 3 | 12 | 0 | I am running development web server in Google App Engine Launcher without any troubles.
But I can't successfully stop it. When I press the Stop button, nothing happens.
Nothing adds in logs after pressing Stop.
And after that I can't close launcher.
The only way to close launcher is Task Manager.
Although when I am using dev_appserver.py myapp via cmd it is successfully stopped by Ctrl+C.
By the way I am under proxy. | Can't stop web server in Google App Engine Launcher | 0 | 0 | 0 | 981 |
25,671,624 | 2014-09-04T17:30:00.000 | 0 | 0 | 0 | 0 | python,c++,include,swig | 25,690,665 | 1 | false | 0 | 1 | There isn't a builtin way to generate the .i file automatically, or to populate the .i with all .h in a folder and all "other .h to make it work properly"
Part of the reason is that "working properly" is completely arbitrary: if A is in that set of headers you want to export, and A derives from B, do you need to export B "for it to work properly"? Not at all. But you might want to. And even if A returns an instance of B, do you need to export B? No (so you don't need to include B.h in the .i); the object returned by SWIG will be an opaque handle to the B instance, and although you won't be able to call any methods on it, you will be able to pass it as an argument to functions that accept B as a parameter.
The most practical is to write a batch or python script to find a base set of the .h you want (a one liner if all in same folder), copy/paste into your .i, and manually cleanup so .i only contains what files you really want to export to target language, and add any missing ones to get certain features you want (like instantiate base classes etc). You can say "no I don't want to have to customize" all you want, you don't have a choice. | 1 | 3 | 0 | Basically, I have a large existing code base and I want to wrap all of the .h files in one particular directory using SWIG. Many of the classes in these .h files inherit from other classes defined elsewhere in the directory tree, and it would be a pain to track down each one of them by hand. Is there any way to get SWIG to automatically include these or to at least automate the creation of the .i file? I don't want to wrap any classes outside of my own code (such as the standard library), but these would end up being included if I used the -importall option. | How can I wrap many .h files with SWIG and include any dependencies? | 0 | 0 | 0 | 734 |
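The "write a batch or python script to find the headers" step can be sketched in a few lines. Hedged: this only emits a starter .i that lists every header in one folder; per the answer you still prune it by hand to the classes you actually want exported, and the module name is yours to choose:

```python
import glob
import os

def generate_interface(module_name, header_dir):
    """Emit a starter SWIG .i file listing every header in header_dir.

    The result is only a starting point: manually remove the %include
    lines for headers you do not want exported, and add any extra
    directives (e.g. %template) you need."""
    headers = sorted(glob.glob(os.path.join(header_dir, "*.h")))
    names = [os.path.basename(h) for h in headers]
    lines = ["%module " + module_name, "%{"]
    lines += ['#include "%s"' % n for n in names]   # goes into wrapper .cxx
    lines += ["%}"]
    lines += ['%%include "%s"' % n for n in names]  # tells SWIG what to wrap
    return "\n".join(lines) + "\n"
```

Run it once, redirect the string to mymodule.i, and edit from there.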
25,681,118 | 2014-09-05T07:58:00.000 | 0 | 1 | 0 | 0 | python,emacs,comments,abbreviation,python-mode | 25,692,694 | 1 | false | 0 | 0 | IIRC this was a bug in python.el and has been fixed in more recent versions. Not sure if the fix is in 24.3 or in 24.4.
If it's still in the 24.4 pretest, please report it as a bug. | 1 | 0 | 0 | I'm trying to use abbrev-mode in python-mode.
Abbrevs work fine in python code but don't work in python comments.
How can I make them work in python comments too?
Thanks. | Emacs: how to make abbrevs work in python comments? | 0 | 0 | 0 | 67 |
25,682,609 | 2014-09-05T09:23:00.000 | 1 | 0 | 1 | 0 | python,linux,virtual-machine,virtualenv,apt-get | 25,682,708 | 1 | true | 0 | 0 | I think you're getting a bit confused about what virtualenv does. It is only for isolating Python files and libraries (those you install with pip install). It does nothing for your operating system files (those you install with apt-get).
If you want to create a re-usable container of operating system files (with apt-get) then look instead at something like Docker. | 1 | 0 | 0 | I have a virtual environment I have created on an Ubuntu virtal machine I am hosting on a windows PC. I intend to replicate my virtual machine in my virtal environment on the virtual machine. However, when trying to install modules to the VE I get a messgae saying that they are already installed - they're not installed in the VE but are on the VM. I thought when set active the VE would have no context of the VM which hosts it?
I have downloaded virtualenv (sudo pip install virtualenv) and then created a virtual environment (sudo virtualenv virtual_environment). I then activated the virtual environment (source virtual_environment/bin/activate).
When I try and do an apt-get install I get the message 0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded despite the fact I have no modules whatsoever on the VE.
What am I doing wrong?
Thanks! | Cannot Install into Virtual Environment | 1.2 | 0 | 0 | 59 |
25,683,758 | 2014-09-05T10:26:00.000 | 0 | 0 | 0 | 1 | python | 26,286,915 | 1 | false | 1 | 0 | Try: C:\Users\\appdata\roaming\appcelerator\
that is where I found it. I had the same problem. I also just put aptana into the search input field and let the system do its thing. | 1 | 0 | 0 | But I can't find it where it is installed. It isn't even listed at the start menu, it can't be found at Program Files (64 bit) and also in Program Files (x86). I repaired the installation but again no way to find it. | I just downloaded Aptana Studio 3 in Windows 8.1 Pro with Java SDK | 0 | 0 | 0 | 576
25,690,621 | 2014-09-05T16:55:00.000 | 0 | 0 | 0 | 1 | python,windows,batch-file,scheduled-tasks | 52,074,147 | 2 | false | 0 | 0 | Same situation: task -> batch script -> Python process -> subprocess(es), but on Windows Server 2012
I worked around the problem by providing the absolute path to the script/exe in subprocess.Popen.
I verified the environment variables available inside the Python process and the script/exe is in a directory on the PATH. Still, Windows gives a FileNotFoundError unless I provide the absolute path.
Strangely, calling the same script/exe from the batch script is possible without providing its absolute path. | 1 | 3 | 0 | I noticed a rather interesting problem the other day.
I have a windows scheduled task on Windows server 2008 RT. This task runs a batch file which runs a python script I've built. Within this python script there is a subprocess.Popen call to run several other batch files. However for the past couple days I've noticed that the task has successfully run however the secondary batch files did not. I know the python script ran successfully due to the logs it created and all the files it makes that the secondary batch files use are all there. However the completed files are not.
If I just run the batch file by itself everything works perfectly. Does Microsoft's task scheduler not allow a program to open additional batch files and is there a workaround for this? | Windows Task Scheduler not allowing python subprocess.popen | 0 | 0 | 0 | 1,475 |
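The absolute-path workaround from the first answer can be sketched like this. Hedged: shutil.which needs Python 3.3+ (on the 2.7 setups of that era you would walk os.environ['PATH'] yourself), and under Task Scheduler you may also need to set cwd= explicitly since the working directory differs from an interactive session:

```python
import shutil
import subprocess

def run_program(name, *args):
    """Resolve a program to its absolute path before launching it.

    Under Task Scheduler the PATH and working directory can differ from
    an interactive session, so resolving up front avoids 'file not
    found' surprises inside subprocess.Popen."""
    exe = shutil.which(name)
    if exe is None:
        raise OSError("cannot locate %r on PATH" % name)
    return subprocess.Popen([exe] + list(args))
```

For .bat files specifically you would call run_program with the full path to the batch file (or to cmd.exe with /c), since the resolution step is the part that fails under the scheduler.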
25,693,870 | 2014-09-05T20:56:00.000 | 1 | 0 | 0 | 0 | python,numpy,parallel-processing,scipy,scikit-learn | 25,750,072 | 1 | true | 0 | 0 | Indeed BLAS, or in my case OpenBLAS, was performing the parallelization.
The solution was to set the environment variable OMP_NUM_THREADS to 1.
Then all is right with the world. | 1 | 4 | 1 | I am running some K-Means clustering from the sklearn package.
Although I am setting the parameter n_jobs = 1 as indicated in the sklearn documentation, and although a single process is running, that process will apparently consume all the CPUs on my machine. That is, in top, I can see the python job is using, say 400% on a 4 core machine.
To be clear, if I set n_jobs = 2, say, then I get two python instances running, but each one uses 200% CPU, again consuming all 4 of my machine's cores.
I believe the issue may be parallelization at the level of NumPy/SciPy.
Is there a way to verify my assumption? Is there a way to turn off any parallelization in NumPy/SciPy, for example? | Inspecting or turning off Numpy/SciPy Parallelization | 1.2 | 0 | 0 | 525 |
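The environment-variable fix from the accepted answer has to happen before NumPy/SciPy are first imported, because the BLAS thread pools read the variables once at startup. A hedged sketch; OMP_NUM_THREADS is the one the answer used, and the other two are the usual per-library overrides, which may or may not apply to your particular BLAS build:

```python
import os

# Must run BEFORE `import numpy` anywhere in the process.
for var in ("OMP_NUM_THREADS",       # OpenMP, honoured by OpenBLAS
            "OPENBLAS_NUM_THREADS",  # OpenBLAS-specific override
            "MKL_NUM_THREADS"):      # Intel MKL builds of NumPy
    os.environ[var] = "1"

# import numpy  # safe to import now; BLAS calls stay single-threaded
```

Setting them in the shell (export OMP_NUM_THREADS=1) before launching Python achieves the same thing and avoids import-order pitfalls.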
25,694,785 | 2014-09-05T22:24:00.000 | 1 | 0 | 0 | 0 | python,safari,selenium-webdriver | 26,345,400 | 1 | false | 1 | 0 | Darth,
Mac osascript has libraries for Python. Be sure to 'import os' to gain access to the Mac osascript functionality.
Here is the command that I am using:
cmd = """
osascript -e 'tell application "System Events" to keystroke return'
"""
os.system(cmd)
This does a brute force return. If you're trying to interact with system resources such as a Finder dialog, or something like that, make sure you give it time to appear and go away once you interact with it. You can find out what windows are active (as well as setting Safari or other browsers 'active', if it hasn't come back to front) using Webdriver / Python.
Another thing that I have to do is to use a return call after clicking on buttons within Safari. Clicks are a little busted, so I will click on something to select it (Webdriver gets that far), then do an osascript 'return' to commit the click.
I hope this helps.
Best wishes,
-Vek
If this answer appears on ANY other site than stackoverflow.com, it is without my authorization and should be reported | 1 | 1 | 0 | I am needing to use the ENTER key in Safari. Turns out Webdriver does not have the Interactions API in the Safari driver. I saw some code from a question about this with a java solution using Robot, and was wondering if there is a purely Python way to do a similar thing. | Getting Around Webdriver's Lack of Interactions API in Safari | 0.197375 | 0 | 1 | 95 |
25,697,623 | 2014-09-06T06:45:00.000 | 0 | 0 | 0 | 0 | python,soap,suds | 27,094,874 | 1 | false | 0 | 0 | Yes this is possible and seems to be used in different "fixer" implementations to take care of buggy servers. Basically you should write a MessagePlugin and implement the sending method. | 1 | 0 | 0 | Is there any method to return the SOAP Request XML before triggering a call to SOAP method in Suds library?
The client.last_sent() method returns the request XML after the call has been triggered. But I want to see it before triggering the call. | Request XML from SUDS python | 0 | 0 | 1 | 219
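The MessagePlugin approach the answer mentions works because suds calls the plugin's sending(context) hook just before the request goes over the wire, with the serialized SOAP request in context.envelope. The hook body is sketched standalone below (no suds import, so the snippet stays self-contained); with suds itself you would subclass suds.plugin.MessagePlugin and pass plugins=[CaptureSent()] to suds.client.Client:

```python
class CaptureSent(object):
    """Sketch of a suds MessagePlugin: the sending() hook receives the
    outbound request, so the XML can be inspected, logged, or even
    rewritten *before* the call happens."""

    def __init__(self):
        self.last_sent_xml = None

    def sending(self, context):
        # With suds, context.envelope holds the serialized SOAP request.
        self.last_sent_xml = context.envelope
```

After attaching the plugin, plugin.last_sent_xml is populated at send time rather than only after the response, which is the behaviour the question asks for.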
25,699,308 | 2014-09-06T10:28:00.000 | 0 | 0 | 0 | 1 | java,python,memory,vnc | 25,699,358 | 1 | true | 1 | 0 | directly access system display memory on Linux
You can't. Linux is a memory-protected, virtual-address-space operating system. Granted, the kernel gives you access to the graphics memory through some node in /dev, but that's not how you normally implement this kind of thing.
Also in Linux you're normally running a display server like X11 (or in the future something based on the Wayland protocol) and there might be no system graphics memory at all.
I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screen shot), convert it into RAW format, compress it and store it in an ArrayList.
That's exactly how its done. Use the display system's method to capture the screen. It's the only reliable way to do this. Note that if conversion or compression is your bottleneck, you'd have that with fetching it from graphics memory as well. | 1 | 1 | 0 | I am trying to create my own VNC client and would like to know how to directly access system display memory on Linux? So that I can send it over a Socket or store it in a file locally.
I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screenshot), convert it into RAW format, compress it and store it in an ArrayList.
But, I find this method a bit too resource heavy. So, was searching for alternatives.
Please, let me also know if there are other ways for the same (using Java or Python only)? | How to access system display memory / frame buffer in a Java program? | 1.2 | 0 | 0 | 886 |
25,703,792 | 2014-09-06T19:09:00.000 | 0 | 0 | 0 | 0 | python,statistics,classification,computer-science,differentiation | 25,735,046 | 2 | false | 0 | 0 | How about if you difference the data (I.e., x[i+1] - x[i]) repeatedly until all the results are the same sign? For example, if you difference it twice and all the results are nonnegative, you know it's convex. Otherwise difference again and check the signs. You could set a limit, say 10 or so, beyond which you figure the sequence is too complex to characterize. Otherwise, your shape is characterized by the number of times you difference, and the ultimate sign. | 1 | 3 | 1 | This is a bit hard to explain. I have a list of integers. So, for example, [1, 2, 4, 5, 8, 7, 6, 4, 1] - which, when plotted against element number, would resemble a convex graph. How do I somehow extract this 'shape' characteristic from the list? It doesn't have to particularly accurate - just the general shape, convex w/ one hump, concave w/ two, straight line, etc - would be fine.
I could use conditionals for every possible shape: for example, if the slope is positive up to a certain index, and negative after, it's a slope, with the skewness depending on index/list_size.
Is there some cleverer, generalised way? I suppose this could be a classification problem - but is it possible without ML?
Cheers. | Find the 'shape' of a list of numbers (straight-line/concave/convex, how many humps) | 0 | 0 | 0 | 2,040 |
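The repeated-differencing heuristic from the first answer can be coded directly. Hedged: the (depth, sign) pair is only a rough signature; for instance, a straight increasing line and a convex increasing curve can both stop at depth 1 with a positive sign, so treat this as a coarse classifier, not an exact one:

```python
def diff(xs):
    """First differences: x[i+1] - x[i]."""
    return [b - a for a, b in zip(xs, xs[1:])]

def classify(xs, max_depth=10):
    """Difference repeatedly until every result shares a sign.

    Returns (depth, sign): e.g. (1, 1) ~ increasing, (2, -1) ~ concave
    hump, (2, 1) ~ convex dip. sign is 0 when the differences are all
    zero. Returns None past max_depth (too wiggly to characterize)."""
    cur = list(xs)
    for depth in range(1, max_depth + 1):
        cur = diff(cur)
        if not cur:
            return depth, 0
        if all(v >= 0 for v in cur):
            return depth, (1 if any(cur) else 0)
        if all(v <= 0 for v in cur):
            return depth, -1
    return None
```

To separate a straight line from a genuinely convex curve you could additionally check whether one more differencing yields all zeros (linear) or not.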
25,704,505 | 2014-09-06T20:34:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,lambda,cython | 25,750,546 | 1 | true | 0 | 0 | Pandas ship with their Cython files precompiled against Cython 0.17.2. The <lambda> variant is newer than that, so was probably compiled against the system's Cython version.
You should probably avoid depending on this. It's not even consistent! Errors, for example, tend to use the lambdaN form even on Cython 0.20.2!
If you have to depend on this, standardise on a version: either use Pandas' precompiled sources everywhere or compile them yourself everywhere.
In order to compile Pandas with the system Python, run python setup.py clean to remove the prebuilt .c files. | 1 | 3 | 0 | I have found out that on my PC, a certain method is represented as <cyfunction <lambda> at 0x06DD02A0>, while on a CentOS server, it's <cyfunction lambda1 at 0x1df3050>. I believe this is the cause for a very obscure downstream error with a different package.
Why is it different? What is its meaning? Can I turn one to the other?
Details: I see this when looking at pandas.algos._return_false. Both PC and server has python 2.7.6, same version of pandas (0.14.1), and cython 0.20.2. The PC is running Win 7, server is CentOS 6.5. | cython lambda1 vs. | 1.2 | 0 | 0 | 312 |
25,704,760 | 2014-09-06T21:07:00.000 | 1 | 0 | 1 | 0 | python | 25,704,932 | 1 | true | 0 | 0 | I tried to copy the text file into a list, and then iterate over it, each time writing the entire file. Is there a better solution?
No, there isn't really any vastly better solution. You need move around all the data following the line anyway, since you're adding or removing text in the middle of the file.
In the best case, you could optimize your program to only move the data after the line you're currently editing, but that optimization is going to be rather minor anyway, and still not affect the actual scaling of the program as a whole, so I'd argue it isn't worth it.
Certainly, the actual running of the program is going to be vastly more expensive than just writing the file anyway, so optimizing that part doesn't matter much. :) | 1 | 0 | 0 | I'd like to take a text file, and for each line, try to comment that line (write to file),
check whether an external script works, and if not, uncomment it. Finally, this should result in a txt file with unnecessary code lines commented out.
I tried to copy the text file into a list, and then iterate over it, each time writing the entire file. Is there a better solution? | Comment/uncomment particular txt line in python | 1.2 | 0 | 0 | 161 |
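The read-lines, edit-one, rewrite-all approach the answer endorses is short to write down. A hedged sketch; the calling loop would toggle a line, run the external check (e.g. via subprocess), and call the same function again to revert if the check fails:

```python
def toggle_comment(path, lineno, marker="# "):
    """Comment/uncomment one line (1-based) of a text file in place.

    The whole file is rewritten each time; as the answer notes, there
    is no meaningfully cheaper option when editing mid-file."""
    with open(path) as f:
        lines = f.readlines()
    i = lineno - 1
    if lines[i].startswith(marker):
        lines[i] = lines[i][len(marker):]   # uncomment
    else:
        lines[i] = marker + lines[i]        # comment out
    with open(path, "w") as f:
        f.writelines(lines)
```

Calling it twice on the same line restores the original file, which is exactly the revert step the question needs.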
25,705,280 | 2014-09-06T22:18:00.000 | 1 | 0 | 0 | 0 | python,web-services,download | 25,705,332 | 1 | true | 0 | 0 | If you're generating the content on the fly, you could infer the client's platform by introspecting the User-Agent string on the incoming HTTP header and then treat newlines accordingly.
That is, if the User-Agent string indicates the client is using Windows, use CR + LF (\r\n), and if it indicates a *nix platform (Linux, OS X, etc.) then use just LF (\n).
Edit in response to comment from OP:
The type can also be supplied in the HTTP headers. That's via the header Content-Type. The value text/plain indicates a regular text file, and this is the default value for this header field.
However, there is nothing that tells the browser to "adapt newline format". The handling of raw text would be a combination of browser and platform specific. In some cases it might trigger a launch dialog, as well, to, say, load it in a text editor.
For a file downloaded on a given platform, the bigger question would be the support outside the browser, I think. i.e. in text editors and such, so it would be better to match the platform's preferred newline type in that case if you're dynamically generating the content.
If it's something solely meant to be rendered in a browser, then it might be worth considering rendering it in HTML, which is standards based (although even there, there can be browser/platform differences).
While there are multiple ways to achieve formatting, a linebreak in HTML can always be represented as <br> and <p></p> to denote a paragraph (usually with space surrounding it on top and bottom), so there's more consistency there. In that case it would be marked up when loading in a regular text editor, though, which may or may not be what you want. | 1 | 0 | 0 | I am writing a web application in Python. I need to generate a file to download. Ideally, the file should consist of one item per line. My question is: how can I produce the correct type of newline characters, so that the downloaded file works well for users running both Windows and Mac/Linux?
Right now, as a stopgap solution, I am separating the items with a space (at least there is a single notion of space on Windows and Linux/Mac!). This is not too bad for my application, but is there a way to do better? | How to produce correct newlines in downloads from python web server | 1.2 | 0 | 0 | 40 |
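The User-Agent heuristic from the answer boils down to a small helper. Hedged: UA sniffing is best-effort (UA strings can lie), and since Mac/Linux editors generally cope with CRLF anyway, defaulting to CRLF for unknown agents is also a defensible choice:

```python
def newline_for(user_agent):
    """Pick a line ending from the request's User-Agent header."""
    if user_agent and "Windows" in user_agent:
        return "\r\n"
    return "\n"

def render(items, user_agent):
    """Join one item per line using the platform-appropriate ending."""
    nl = newline_for(user_agent)
    return nl.join(items) + nl
```

In the request handler you would read the header (e.g. environ['HTTP_USER_AGENT'] under WSGI) and pass it through before writing the download response.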
25,709,205 | 2014-09-07T10:21:00.000 | 0 | 1 | 0 | 0 | python,ide,openerp | 25,716,447 | 1 | false | 0 | 0 | Slow and useless? We have 4-5 devs on this platform and it works well. Could there be a problem with your setup? What do you mean by "tools of python" - I am using the pydev plugin.
My laptop is Win 7 but I use VirtualBox and run an Ubuntu VM with Eclipse with Pydev and postgres in the VM. I find debugging works well and performance is pretty good allowing for the fact it is a laptop, certainly good enough for most things. The biggest hold up is usually adding/removing columns to tables with a lot of rows as Postgres actually creates a new table in the background and copies the rows across into the new table, but this is a Postgres effect and will be the same no matter what.
There are no dedicated tools that I am aware of. I think most people work like this, but there is always PyCharm or Komodo, or you can even go the pure editor way with Sublime. | 1 | 1 | 0 | I want to know if there are some IDEs or tools that can help to code and debug modules of OpenERP 7. I tried the Python tools in Eclipse but they are still slow and useless.
Are there some powerful tools dedicated to OpenERP developers? | Tools for Coding and Debugging Modules of OpenERP | 0 | 0 | 0 | 131
25,710,724 | 2014-09-07T13:32:00.000 | -2 | 1 | 1 | 0 | python,python-2.7,sys.path | 25,710,953 | 2 | false | 0 | 0 | There is no way to delete pth file.
Standard python will search for pth files in /usr/lib and /usr/local/lib.
You can create an isolated Python via virtualenv though. | 1 | 3 | 0 | My situation is as follows:
I have a locally installed version of python. There exists also a global one, badly installed, which I do not want to use. (I don't have admin privileges).
On /usr/local/lib/site-packages there is a x.pth file containing a path to a faulty installation of numpy
My PYTHONPATH does not have any of those paths. However, some admin generated script adds /usr/local and /usr/local/bin to my PATH (this is an assumption, not a known fact).
This somehow results in the faulty-numpy-path added to my sys.path. When I run python -S, it's not there.
site.PREFIXES does not include /usr/local. I have no idea why the aforementioned pth file is loaded.
I tried adding a pth file of my own, to the local installation's site-packages dir, doing import sys; sys.path.remove('pth/to/faulty/numpy') This fails because when that pth file is loaded, the faulty path is not yet in sys.path.
Is there a way for me to disable the loading of said pth file, or remove the path from sys.path before python is loaded?
I've tried setting up virtualenv and it does not suit my current situation. | prevent python from loading a pth file | -0.197375 | 0 | 0 | 961 |
25,712,611 | 2014-09-07T16:52:00.000 | 3 | 0 | 0 | 0 | python,sql,sqlite | 25,712,762 | 1 | false | 0 | 0 | Python's sqlite3 module executes the statement with the list values in the correct order.
Note: if the code already knows the to-be-generated ID value, then you should insert this value explicitly so that you get an error if this expectation turns out to be wrong. | 1 | 3 | 0 | In Python, I'm using SQLite's executemany() function with INSERT INTO to insert stuff into a table. If I pass executemany() a list of things to add, can I rely on SQLite inserting those things from the list in order? The reason is because I'm using INTEGER PRIMARY KEY to autoincrement primary keys. For various reasons, I need to know the new auto-incremented primary key of a row around the time I add it to the table (before or after, but around that time), so it would be very convenient to simply be able to assume that the primary key will go up one for every consecutive element of the list I'm passing executemany(). I already have the highest existing primary key before I start adding stuff, so I can increment a variable to keep track of the primary key I expect executemany() to give each inserted row. Is this a sound idea, or does it presume too much?
(I guess the alternative is to use execute() one-by-one with sqlite3_last_insert_rowid(), but that's slower than using executemany() for many thousands of entries.) | Python SQLite executemany() always in order? | 0.53705 | 1 | 0 | 444 |
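The answer's suggestion (supply the predicted key explicitly so a wrong assumption fails loudly) is easy to demonstrate with the stdlib sqlite3 module; the table name and counter here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

next_id = 1  # highest existing key + 1, computed before inserting
rows = [(next_id + i, name) for i, name in enumerate(["a", "b", "c"])]

# Supplying the predicted id explicitly turns a wrong assumption into
# an IntegrityError, instead of silent drift between your counter and
# the keys SQLite would have auto-assigned.
conn.executemany("INSERT INTO t (id, name) VALUES (?, ?)", rows)

stored = conn.execute("SELECT id, name FROM t ORDER BY id").fetchall()
```

executemany processes the sequence in order, so stored matches the input list one-for-one, and any reused key raises immediately.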
25,712,856 | 2014-09-07T17:19:00.000 | 1 | 0 | 0 | 0 | python,size,pixels,turtle-graphics | 35,849,771 | 4 | false | 0 | 0 | I know exactly what you mean, shapesize does not equal the width in pixels and it had me buggered for a day or 2.
I ended up changing the turtle shape to a square and simply using Print Screen in Windows to take a snapshot of the canvas with the square turtle in the middle, then took that screenshot into Photoshop and zoomed right up to the square until I could see the pixel grid. I found that the default size of the square turtle is 21x21 pixels. Who knows why they made it 21, but if I take the size I want the turtle to be in pixels, like 20, and divide that by 21, I get 0.9523...
Rounding the value to 0.95 and putting that into shapesize gave me a square turtle of exactly 20x20 pixels and made it much easier to work with. You can do this with any pixel size you want the turtle to be, but I have only tried it with the square turtle.
I hope that helps somewhat or gives you an idea of how to find the turtle size in pixels. MS Paint will do the same provided you go to the View tab, turn on the grid, and then zoom in as far as you can. | 1 | 1 | 0 | I have got a problem. I want to get the pixeled size of my turtle in Python, but how do I get it? | Python: Turtle size in pixels | 0.049958 | 0 | 0 | 8,354
25,713,194 | 2014-09-07T17:58:00.000 | 13 | 0 | 0 | 1 | javascript,python,c,google-app-engine,browser | 51,140,813 | 3 | false | 1 | 0 | Old question but for those that land in here in 2018 it would be worth looking at Web Assembly. | 1 | 6 | 0 | I've spent days of research over the seemingly simple question: is it possible to run C code in a browser at all? Basically, I have a site set up in Appengine that needs to run some C code supplied by (a group of trusted) users and run it, and return the output of the code back to the user. I have two options from here: I either need to completely run the code in the browser, or find some way to have Python run this C code without any system calls.
I've seen mixed responses to my question. I've seen solutions like Emscripten, but that doesn't work because I need the LLVM code to be produced in the browser (I cannot run compilers in AppEngine.) I've tried various techniques, including scraping from the output page on codepad.org, but the output I will produce is so high that I cannot use services like codepad.org because they trim the output (my output will be ~20,000 lines of about 60 characters each, which is trimmed by codepad due to a timeout). My last resort is to make my own server that can serve my requests from my Appengine site, but that seems a little extreme.
The code supplied by my users will be very simple C. There are no I/O or system operations called by their code. Unfortunately, I probably cannot simply use a find/replace operation in their code to translate it to Javascript, because they may use structures like multidimensional arrays or maybe even classes.
I'm fine with limiting my users to one cross-platform browser, e.g. Chrome or Firefox. Can anyone help me find a solution to this question? I've been baffled for days. | Running C in A Browser | 1 | 0 | 0 | 8,593 |
25,714,046 | 2014-09-07T19:40:00.000 | 1 | 0 | 1 | 0 | python,numpy | 25,714,220 | 3 | false | 0 | 0 | Yes, it is safe to delete it if your input data consists of a list. From the documentation No copy is performed (ONLY) if the input is already an ndarray. | 1 | 3 | 1 | I have a very long list of list and I am converting it to a numpy array using numpy.asarray(), is it safe to delete the original list after getting this matrix or does the newly created numpy array will also be affected by this action? | does numpy asarray() refer to original list | 0.066568 | 0 | 0 | 1,721 |
25,714,531 | 2014-09-07T20:36:00.000 | 2 | 0 | 1 | 0 | python,nltk | 25,714,754 | 4 | false | 0 | 0 | Use soundex or double metaphone to find out if they rhyme. NLTK doesn't seem to implement these but a quick Google search showed some implementations. | 1 | 21 | 1 | I have a poem and I want the Python code to just print those words which are rhyming with each other.
So far I am able to:
Break the poem sentences using wordpunct_tokenize()
Clean the words by removing the punctuation marks
Store the last word of each sentence of the poem in a list
Generate another list using cmudict.entries() whose elements are those last words and their pronunciations.
I am stuck with the next step. How should I try to match those pronunciations? In all, my major task is to find out if two given words rhyme or not. If rhyme, then return True, else False. | Find rhyme using NLTK in Python | 0.099668 | 0 | 0 | 14,269 |
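A minimal, hedged sketch of the matching step: it assumes the pronunciations have already been pulled from cmudict.entries() (the three example pronunciations below are hard-coded for illustration), and it uses the common heuristic that two words rhyme when their phonemes match from the last stressed vowel onward.

```python
def rhyme_tail(phones):
    # In CMUdict, stressed vowels carry a stress digit 1 or 2 (e.g. 'AY1');
    # the rhyming part of a word runs from the last stressed vowel to the end.
    for i in range(len(phones) - 1, -1, -1):
        if phones[i][-1] in "12":
            return phones[i:]
    return phones  # no stressed vowel: fall back to the whole pronunciation

def rhymes(phones_a, phones_b):
    """True if the two pronunciations share the same rhyming tail."""
    return rhyme_tail(phones_a) == rhyme_tail(phones_b)

# Hard-coded pronunciations in the cmudict format, for illustration only.
BRIGHT = ["B", "R", "AY1", "T"]
LIGHT = ["L", "AY1", "T"]
BOUGHT = ["B", "AO1", "T"]
```

With real input you would look up each line-final word in the cmudict entries list and compare the tails pairwise, returning True or False as the question requires.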
25,715,940 | 2014-09-08T00:10:00.000 | 3 | 0 | 0 | 0 | python,django,macos,python-3.x | 25,715,948 | 1 | true | 1 | 0 | You need to install Django for Python 3: pip3 install django | 1 | 2 | 0 | I have installed the latest versions of both Django and Python. The default "python" command is set to 2.7; if I want to use Python 3, I have to type "python3".
Having to type "python3" and a Django command causes problems. For example, if I type "python3 manage.py migrate", I get an error. The error is:
Traceback (most recent call last):
File "manage.py", line 8, in
from django.core.management import execute_from_command_line
ImportError: No module named 'django'
Django does not seem to recognize my python 3. How do I get around this? Your help is greatly appreciated. | Configuring Django 1.7 and Python 3 on mac osx 10.9.x | 1.2 | 0 | 0 | 344 |
25,720,607 | 2014-09-08T08:59:00.000 | 2 | 0 | 1 | 0 | python,python-3.x | 25,721,351 | 1 | true | 0 | 0 | I think it's to do with the way the GUI opens and displays the file: I believe IDLE loads the whole thing into RAM when you open it, so the only way to get around that without changing your file would be to add more RAM. A more sensible approach would be to break up your file into smaller chunks, then add import statements at the top of the main file.
This also means that your file can be made easier to work with (assuming you break it up into sensible chunks where the functions in each file are related).
If you are worried about what will happen when you try to compile/package it up for use on other computers, I do this all the time with cx_freeze with no problems.
James | 1 | 2 | 0 | I'm running Vista SP2 : Python 3.3.5
I have a fairly large .py file (~11k lines) which I'm working on. I recently installed PyScripter and had been using it without issue for a week or so. But yesterday I went into PyScripter, and as soon as I added a single new character to the file I got an "Out Of Memory" error. Sure enough, it had maxed out all 2GB of RAM on my laptop. I tried opening the file in IDLE, and although I could edit the file, it would freeze up whenever I tried to run it. However, in PythonWin the file opens and runs just fine. I commented out the function that I had last changed, wondering if my code was causing the issue, but PyScripter and IDLE are still acting the same.
Has anyone experienced this before? Any ideas? | Strange IDE behaviour with a python file | 1.2 | 0 | 0 | 62 |
25,721,518 | 2014-09-08T09:50:00.000 | 0 | 0 | 1 | 0 | c#,debugging,ironpython,scripting-language | 28,464,040 | 1 | false | 0 | 0 | I'm pretty sure it moved into the core with the rest of the DLR, it's no longer special to IronPython. But I came across this question while looking for info on debugging so I may be completely wrong. | 1 | 0 | 0 | since version 2.7, the Microsoft.Scripting.Debugging.dll is not shipped with IronPython any more.
Where can I find it - or is there an alternative if I want to implement single stepping for IronPython?
All example implementations that I found are from 2010 or older, using the IronPython version with included Microsoft.Scripting.Debugging.dll.
Alex | Where can I find the Microsoft.Scripting.Debugging.dll? | 0 | 0 | 0 | 611 |
25,721,841 | 2014-09-08T10:08:00.000 | -2 | 0 | 1 | 0 | python,python-2.7,pycharm | 25,721,944 | 6 | false | 0 | 0 | Don't want to answer this for you as it's simple enough to work out yourself.
But if I were you I'd use the string.find() method, which takes the string you're looking for and the position to start looking from, combined with a while loop which uses the result of the find method as its condition in some way.
That should in theory give you the answer. | 1 | 4 | 0 | Say I have string = 'hannahannahskdjhannahannah' and I want to count the number of times the string hannah occurs. I can't simply use count, because that only counts non-overlapping occurrences.
I.e., I am expecting it to return 4, but it returns only 2 when I run string.count('hannah') in PyCharm. | Python: Count overlapping substring in a string | -0.066568 | 0 | 0 | 5,665 |
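Following the hint in the answer, one way to count overlapping occurrences is a loop around str.find(), restarting the search one character past each hit (a sketch; a regex lookahead would work too):

```python
def count_overlapping(haystack, needle):
    """Count occurrences of needle in haystack, allowing overlaps."""
    count = 0
    pos = haystack.find(needle)
    while pos != -1:
        count += 1
        pos = haystack.find(needle, pos + 1)  # resume just past the last hit
    return count

print(count_overlapping('hannahannahskdjhannahannah', 'hannah'))  # 4
```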
25,722,308 | 2014-09-08T10:35:00.000 | 0 | 0 | 0 | 0 | python,django,web | 25,724,888 | 1 | false | 1 | 0 | I think that for this purpose you need OpenWrt or similar firmware for your router; or, as a second solution, you can make one of your computers an internet gateway, so the router gets internet from this gateway, and on the gateway there is an app/config/etc. which redirects the user to your app when the user first opens any page. | 1 | 0 | 0 | I would like to write a little Django web app to run on my local WLAN, to allow my customers to browse through the offers I have made available.
The WLAN is not password protected and is isolated from the web.
Ideally, I would like that when a user connects to my WLAN with a smartphone or tablet, he or she is taken directly to the offers web server, without entering any address or URL.
Is there any combination of port forwarding/triggering on the WLAN router and the web server that can accomplish this task? | autologon to a django web app | 0 | 0 | 0 | 77 |
25,723,993 | 2014-09-08T12:12:00.000 | 0 | 1 | 0 | 0 | visual-studio-2010,error-handling,console-application,ironpython | 27,057,999 | 1 | true | 0 | 0 | Set subsystem::console in the project properties and use Ctrl+F5 to execute the script. | 1 | 0 | 0 | In Visual Studio 2010, the IronPython console disappears before I can read the error.
When I surround the code with a try/except block, I do not get the error text at all.
What should I do in this case?
Thank you in advance! | In Visual Studio 10 IronPython console disappears before I can read the error | 1.2 | 0 | 0 | 39 |
25,725,702 | 2014-09-08T13:43:00.000 | 0 | 0 | 0 | 0 | python,pyscripter | 30,013,579 | 2 | false | 1 | 1 | Click on the windows logo, write %appdata%, then open "Roaming" | 1 | 1 | 0 | I have installed it using the executable file downloaded from its webpage.
I tried finding %AppData%\skins\ as suggested by blogs but I just couldn't find it. Has anybody been stuck here? | Changing theme in Pyscripter? | 0 | 0 | 0 | 2,831 |
25,728,378 | 2014-09-08T16:03:00.000 | 0 | 0 | 0 | 0 | python,django,task,customization,celery | 25,728,619 | 1 | false | 1 | 0 | As you know how to create & execute tasks, it's very easy to allow customers to create tasks.
You can create a simple form to get the required information from the user. You can also have a profile page, or some other page where you show user preferences.
The above form helps to get data (like how frequently they need to receive emails) from the user. Once the form is submitted, you can trigger a task asynchronously which does the processing & sends emails to the customer. | 1 | 0 | 0 | I am new to Celery and I can't figure out at all how I can do what I need. I have seen how to create tasks myself and change Django's settings.py file in order to schedule them. But I want to allow users to create "customized" tasks.
I have a Django application that is supposed to give the user the opportunity to create the products they need (files that gather weather information) and to send them emails at a frequency they choose.
I have a database that contains the parameters of every product like geographical time, other parameters and frequency. I also have a script sendproduct.py that sends the product to the user.
Now, how can I simply create, for each product a user has created, a task that executes my script with the given parameters at the given frequency? | How the user of my Django app can create "customized" tasks? | 0 | 0 | 0 | 32 |
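One common shape for this (a hedged sketch, with made-up field names): rather than registering one Celery task per product, run a single periodic task (via Celery beat) every few minutes that scans the product table for anything whose e-mail is due, then dispatches the send asynchronously. The due-date check itself is plain Python:

```python
from datetime import datetime, timedelta

def due_products(products, now=None):
    """Return the products whose next e-mail should be sent now.

    Each product is assumed to carry a 'frequency' in minutes and a
    'last_sent' datetime (illustrative names, not from the question)."""
    now = now or datetime.utcnow()
    return [p for p in products
            if now - p["last_sent"] >= timedelta(minutes=p["frequency"])]
```

The periodic task would call due_products(...) over the product queryset, trigger the sendproduct.py logic for each hit, and update last_sent.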
25,729,473 | 2014-09-08T17:07:00.000 | 2 | 0 | 1 | 0 | python,eclipse,pydev | 25,751,163 | 1 | true | 0 | 0 | This is not currently implemented in PyDev (although it's already in the TODO list).
Still, you may want to check the EditBox plugin, which may be useful for indentation guides. | 1 | 2 | 0 | I'm new to Python, and just started using PyDev in Eclipse.
Is there a way to highlight the range of my indentation? i.e., show the level of current indent. Like for Java, when click at the beginning of a curly brace ({}), the Eclipse will highlight (bold) the end curly brace. I wonder if there're similar functions for Python.
Thanks! | How to highlight python indentation in Eclipse | 1.2 | 0 | 0 | 95 |
25,731,886 | 2014-09-08T19:40:00.000 | 1 | 0 | 0 | 0 | python,ssh | 25,732,718 | 1 | true | 0 | 0 | Have you considered using tmux/screen? They have lots of features and can help you detach a terminal and re-attach to it at a later date without disrupting the running process. | 1 | 0 | 0 | I have an external server that I can SSH into. It runs bots on reddit.
Whenever I close the terminal window the bot is running in, the process stops, which means the bot stops as well.
I've tried using
nohup python mybot.py
but it doesn't work: when I close the window and check the processes (ps -e), python does not show up. Are there any alternatives to nohup? Ideally one that prints the output to the terminal instead of to an external file. | Keeping a program going on an external server | 1.2 | 0 | 1 | 45 |
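For completeness, a hedged sketch of the nohup invocation that usually works here (with `sleep 5` standing in for `python mybot.py`): the key details are redirecting output to a log file, since the terminal will be gone and nohup cannot keep printing to it, and backgrounding with &.

```shell
# Detach the process from the terminal so it survives the window closing.
# Replace `sleep 5` with `python mybot.py`.
nohup sleep 5 > bot.log 2>&1 &
echo $! > bot.pid            # keep the pid so the bot can be stopped later
# Follow the output at any time with:  tail -f bot.log
```

tmux/screen, as the answer says, remain the nicer option when you want the output live in a terminal you can re-attach to.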
25,736,407 | 2014-09-09T03:48:00.000 | 1 | 0 | 1 | 0 | python,linux,ubuntu,python-3.x,restart | 25,736,461 | 2 | true | 0 | 0 | You can catch the BaseException class. It is the base class for all errors, so you will handle them all.
If you want to keep the program running when something really nasty happens, like a memory leak or segmentation fault, you should write a watchdog. A watchdog is a program that checks whether the process with a specified pid is running and, if not, restarts it. | 1 | 1 | 0 | In Python I often use try-except blocks to handle certain conditions. However, unexpected errors could potentially be raised and I can't account for all of them. How would I go about restarting a Python program when it stops running inside an environment such as Linux? | How to make a program restart after an error forces it to stop? | 1.2 | 0 | 0 | 900 |
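A small supervisor along the lines the answer suggests: catch BaseException, log the traceback, and restart the callable. This is a hedged sketch (names are mine), and note it cannot survive hard crashes like segfaults or an OOM kill, which need the external watchdog the answer mentions:

```python
import time
import traceback

def run_with_restarts(job, max_restarts=5, delay=0.0):
    """Call job(); on any exception, print the traceback and call it again,
    up to max_restarts times, then re-raise.

    Catching BaseException also swallows KeyboardInterrupt, as the answer
    implies; narrow it to Exception if Ctrl-C should still stop things."""
    restarts = 0
    while True:
        try:
            return job()
        except BaseException:
            traceback.print_exc()
            restarts += 1
            if restarts > max_restarts:
                raise
            time.sleep(delay)
```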
25,736,956 | 2014-09-09T04:56:00.000 | 1 | 0 | 0 | 0 | python,django | 25,739,277 | 1 | true | 1 | 0 | There is absolutely no problem with storing CSS in DB. Just create a TextField in your model and put it there.
Then, in your view's template, output it inside a <style type="text/css"> tag, and that's all. | 1 | 0 | 0 | Hi, I have a scenario where the user inputs CSS data in a text box. I need to read it and apply it to a Django view. I also need to store the CSS data for future modifications by the user.
So my question is
1. Should I store the CSS data in the database, or
2. in a static CSS file, storing the path to the file in the DB?
Thanks. | Store css data in django db | 1.2 | 0 | 0 | 278 |
25,739,840 | 2014-09-09T08:23:00.000 | 2 | 0 | 0 | 0 | php,python,django | 25,740,202 | 3 | true | 1 | 0 | You must use one of these possibilities:
Your friend gives you direct access (even read-only access) to his database and you represent everything as Django models or use raw SQL. The problem with that approach is that you have very high coupling between the two systems. If he changes his table or schema structure for some reason, you will also have to be notified and change things on your end, which is a real headache.
Your friend provides an API end-point from his system that you can access. This protocol can be simple GET requests to retrieve information that return JSON or any other format that suites you both. That's the simplest and best approach for the long run.
You can "fetch" content directly from his site, which returns raw HTML for every request, and then scrape the response you receive. That's also a headache in case he changes his site structure, and you'll need to be aware of that. | 1 | 0 | 0 | I am building a web app in Django and I want to integrate it with the PHP web app that my friend has built.
The PHP web app is like a forum where students can ask questions to the teachers. To do this they have to log in.
And I am making an app in Django that displays a list of colleges, and every college has information about teachers, like the workshop/class timings of the teachers. In my Django app, colleges can create their account and provide information about the workshops/classes of the teachers.
Now what I want is that students who are registered on the PHP web app can book the workshops/classes provided by colleges in the Django app, and colleges can see which students, and how many students, have booked a workshop/class.
How can I get information about students from the PHP web app into Django so that colleges can see which students have booked a workshop? Students cannot book a workshop until they are logged in to the PHP web app.
Please give me any ideas about this. How can I make this possible? | passing user information from php to django | 1.2 | 0 | 0 | 274 |
25,740,355 | 2014-09-09T08:52:00.000 | 0 | 0 | 0 | 0 | python,sql-server | 25,743,680 | 1 | false | 0 | 0 | The BCP API is only available using the ODBC call-level interface and the managed SqlClient .NET API using the SqlBulkCopy class. I'm not aware of a Python extension that provides BCP API access.
You can insert many rows in a single transaction to improve performance. This can be accomplished by batching individual insert statements or by passing multiple rows at once using an XML parameter (which also reduces round-trips). | 1 | 1 | 0 | After scanning the very large daily event logs using regular expressions, I have to load them into a SQL Server database. I am not allowed to create a temporary CSV file and then use the command-line BCP to load them into the SQL Server database.
Using Python, is it possible to use BCP streaming to load data into SQL Server database? The reason I want to use BCP is to improve the speed of the insert into SQL Server database.
Thanks | Loading Large data into SQL Server [BCP] using Python | 0 | 1 | 0 | 1,625 |
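Since there is no BCP API from Python, the usual speed-up is the one the answer describes: batch rows into fewer, larger round-trips (one executemany and one commit per batch). A hedged, driver-agnostic sketch of the batching half:

```python
def batches(rows, size=1000):
    """Group an iterable of rows into lists of at most `size` rows, so each
    group can be sent with a single cursor.executemany() / one commit
    instead of inserting row by row."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch
```

With a DB-API driver such as pyodbc (an assumption; the question does not name one), the loop would be roughly: for b in batches(parsed_rows): cursor.executemany(insert_sql, b); conn.commit().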
25,742,903 | 2014-09-09T10:58:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,shapefile,bounding-box | 25,744,305 | 2 | false | 0 | 1 | Find the points/vertices that describe the line at the left edge - (x1, y1), (x2, y2)
Add a constant to the x values - (x1+k, y1), (x2+k, y2)
Find the y values on the polygon at the new x values - (x1+k, y3), (x2+k, y4)
Draw the line between those two points. | 1 | 0 | 0 | I am working with a GIS problem using a single input of a polygon shapefile.
Consider an irregular polygon. I want to draw vertical lines across the extent of the polygon at equal spacing.
How I intend to proceed is:
Identify the bounding box (done using PyShp)
Draw vertical Lines parallel to the left edge of the bounding box at equal spacing (How?)
Clip the lines to the extent of the polygon (How, without using ArcPy?)
Note: They are required to be only vertical, and not a graticule. Also, I do not intend to use ArcPy, and intend to perform the coding completing in Python (2.7) as this segment of code needs to go into a tool generated from PyQt. | Vertical lines in a polygon shapefile | 0 | 0 | 0 | 1,014 |
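Step 3 (clipping) can be done without ArcPy with a little analytic geometry: intersect each vertical line X = x with every polygon edge and pair up the crossing y values. A hedged pure-Python sketch (convexity is not assumed; holes and the degenerate vertex-exactly-on-the-line case are ignored):

```python
def vertical_clip(polygon, x):
    """Return sorted y values where the line X = x crosses the polygon
    boundary; consecutive pairs (ys[0], ys[1]), (ys[2], ys[3]), ... are
    the clipped segments lying inside the polygon.

    polygon is a list of (x, y) vertex tuples, implicitly closed."""
    ys = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if x1 == x2:
            continue                        # edge parallel to the line
        if min(x1, x2) <= x < max(x1, x2):  # half-open: no double counting
            t = (x - x1) / float(x2 - x1)
            ys.append(y1 + t * (y2 - y1))
    ys.sort()
    return ys
```

Step 2 is then just x = minx + k * spacing for k across the bounding box width, calling vertical_clip for each x.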
25,746,419 | 2014-09-09T13:51:00.000 | 2 | 0 | 0 | 1 | java,python,google-app-engine,module,google-cloud-datastore | 26,067,953 | 1 | false | 1 | 0 | You might be able to do something similar by using AppScale (an open-source project that could help you, if you set up VirtualBox and load the image onto it). Look at community.appscale.com
Another way (mind you, this is tricky) would be to:
1- deploy your Python as a standalone project on localhost:9000
2- deploy your Java as a standalone project on localhost:8000
3- change your Python and Java code so that when they are in dev, they hit the right localhost (Java hits localhost:9000 and Python hits localhost:8000)
4- try, like @tx802 suggested, to specify a path to local_db.
I am not sure either method works, but I figure they are both worth trying at the very least. | 1 | 4 | 0 | I have a multi-module GAE application that is structured like this:
a Python27 module, that is a regular web application. This Python app uses the Datastore API. Regular, boring web app.
a Java module (another web application) that hooks on the Datastore calls (calls made by the Python web app), and displays aggregated data about the recorded Datastore calls.
I have been able to deploy this application on the GAE cloud, and everything works fine.
However, problems arise when I want to run my application on localhost.
The Python module must be started using the Python SDK. The Java module must be started using the Java SDK.
However, the two SDKs do not seem to share the same datastore (I believe they write/read to separate files on disk).
It seems to me that the two SDKs also differ in how far along their Development Console implementations are.
The Python SDK sports a cleaner, more "recent-looking" Development Console (akin to the new console.developers.google.com console) than the Java SDK, which has the old-looking version of the Development Console (akin to the old appspot.com console)
So my question is, is there a way to boot 2+ modules (in different languages: Python, Java) that share the same Datastore files? That'd be nice, since it would allow the Java module to hook on the Python Datastore calls, which does not seem to be possible at the moment. | GAE multi-module, multi-language application on localhost | 0.379949 | 0 | 0 | 503 |
25,747,192 | 2014-09-09T14:28:00.000 | 0 | 0 | 0 | 0 | python,multithreading,postgresql,sqlite | 25,748,935 | 2 | false | 0 | 0 | I used method 1 before. It is the easiest to code. Since that project was a small website, each query took only a few milliseconds, and all the user requests could be processed promptly.
I also used method 3 before, because when queries take longer it is better to queue them; frequent "detect and wait" makes no sense there. It requires a classic producer-consumer model, so it takes more time to code.
But if the queries are really heavy and frequent, I suggest looking at another DB like MS SQL/MySQL. | 1 | 4 | 0 | I've got a sqlite3 database and I want to write to it from multiple threads. I've got several ideas but I'm not sure which I should implement.
create multiple connections, and detect and wait if the DB is locked
use one connection and try to make use of serialized connections (which don't seem to be implemented in Python)
have a background process with a single connection, which collects the queries from all threads and then executes them on their behalf
forget about SQLite and use something like PostgreSQL
What are the advantages of these different approaches, and which is most likely to be fruitful? Are there any other possibilities? | Writing in SQLite multiple Threads in Python | 0 | 1 | 0 | 1,466 |
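Option 3 from the question (a single background writer fed by the other threads) is often the most robust with SQLite. A hedged minimal sketch using queue.Queue, with None as the shutdown sentinel (the temp-file database is only for the demo):

```python
import os
import queue
import sqlite3
import tempfile
import threading

def writer(db_path, jobs):
    """Own the single SQLite connection; consume (sql, params) tuples from
    the queue until a None sentinel arrives."""
    conn = sqlite3.connect(db_path)
    while True:
        job = jobs.get()
        if job is None:
            break
        sql, params = job
        conn.execute(sql, params)
        conn.commit()
    conn.close()

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
jobs = queue.Queue()
t = threading.Thread(target=writer, args=(db_path, jobs))
t.start()
jobs.put(("CREATE TABLE log (msg TEXT)", ()))
for i in range(5):
    jobs.put(("INSERT INTO log (msg) VALUES (?)", ("message %d" % i,)))
jobs.put(None)   # shut the writer down
t.join()
```

Worker threads never touch the connection; they only put work on the queue, so "database is locked" errors disappear.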
25,747,431 | 2014-09-09T14:38:00.000 | 0 | 0 | 0 | 0 | python,sockets,client | 25,748,541 | 1 | true | 0 | 0 | In order to connect the browser to the server (your app), your URL should look like this:
127.0.0.1:12397. I am just not sure how you are planning to send the GET request, but the request will be intercepted by your app once it is sent. | 1 | 0 | 0 | I need to build a server that the Chrome browser can connect to (the browser needs to be the client) over the TCP protocol, receive some URL from the browser, and check whether the requested file exists on my computer. The code below works only when I use a regular client that I built, not a browser client.
My question is: how and where do I send data from the client browser to the server? How do I connect to the browser from the server? | Using the browser as a client in python sockets | 1.2 | 0 | 1 | 381 |
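When the browser connects to 127.0.0.1:12397, the bytes that arrive on the accepted socket are a plain HTTP request, so the URL the question wants is on the first line. A hedged sketch of just the parsing step (the socket accept/recv loop is omitted, and only GET is handled):

```python
import os

def requested_path(request_bytes):
    """From b'GET /somefile.txt HTTP/1.1\r\n...' return 'somefile.txt'."""
    first_line = request_bytes.decode("ascii", "replace").splitlines()[0]
    method, path, _version = first_line.split()
    if method != "GET":
        raise ValueError("only GET is handled in this sketch")
    return path.lstrip("/")

def exists_locally(request_bytes, root="."):
    """Check whether the file the browser asked for exists under root."""
    return os.path.isfile(os.path.join(root, requested_path(request_bytes)))
```

The server would then write a minimal HTTP response back on the same socket so the browser displays the result.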
25,748,396 | 2014-09-09T15:23:00.000 | 7 | 0 | 0 | 0 | python,django,django-admin | 25,763,182 | 5 | true | 1 | 0 | Worked it out - I set admin.site.index_template = "my_index.html" in admin.py, and then the my_index template can inherit from admin/index.html without a name clash. | 1 | 4 | 0 | I wish to make some modifications to the Django admin interface (specifically, remove the "change" link, while leaving the Model name as a link to the page for changes to the instances). I can achieve this by copying and pasting index.html from the admin application, and making the modifications to the template, but I would prefer to only override the offending section by extending the template - however I am unsure how to achieve this as the templates have the same name. I am also open to alternative methods of achieving this effect. (django 1.7, python 3.4.1) | Django Extend admin index | 1.2 | 0 | 0 | 4,302 |
25,749,566 | 2014-09-09T16:21:00.000 | 1 | 1 | 0 | 0 | python,go,rabbitmq | 25,749,840 | 1 | true | 0 | 0 | In order for you to test that all of your messages are published you may do it this way:
Stop consumer.
Enable acknowledgements in the publisher. In Python you can do this by adding one extra line to your code: channel.confirm_delivery(). Publishing will then basically return a boolean indicating whether the message was published. Optionally you may want to use the mandatory flag in basic_publish.
Send as many messages as you want.
Make sure that all of the basic_publish() calls return True.
Count the number of messages in Rabbit.
Enable acks in the consumer by setting no_ack = False.
Consume all the messages.
This will give you an idea where your messages are getting lost. | 1 | 0 | 0 | I use a Python API to insert messages into RabbitMQ, and then use a Go API to get messages from RabbitMQ.
Key 1: RabbitMQ ACK is set to false because of performance.
I insert about 100,000,000 messages into RabbitMQ via the Python API, but when I use the Go API to get the messages, I find that the number of inserted messages isn't equal to the number retrieved. The insert action and the get action are concurrent.
Key 2: The lost-message rate is not over 1 in 1,000,000.
The insert action has a log; the Python API shows that every inserted message was successful.
The get action has a log; the Go API shows that every retrieved message was successful.
But the numbers aren't equal.
Question 1: I don't know how to find the place where the messages are lost. Could anyone give me a suggestion on how to find where the messages are lost?
Question 2: Is there any strategy to ensure messages are not lost? | RabbitMQ message lost | 1.2 | 0 | 0 | 2,544 |
25,752,673 | 2014-09-09T20:00:00.000 | 0 | 0 | 1 | 0 | python,jython-2.5 | 25,753,274 | 1 | false | 0 | 0 | OK, I just fixed this by doing '/SchServices/api/servicegroup/' + self.id. "id" is a built-in function in Jython, so self.id is now an instance variable. | 1 | 0 | 0 | How do I parameterize values in quotes in Jython? This is my calling method:
BaseSTSSchedulerTask.init(self, Test(testId, "Get Service Group by ID"), hostPort, '/SchServices/api/servicegroup/9999', HEADERS)
I want to replace the value 9999 with a variable which is returned from another method,
like id = Data.getID().
I tried doing '/SchServices/api/servicegroup/' + id, but it does not help. Any idea how to handle this? | Jython: Parameterize values in quotes | 0 | 0 | 0 | 39 |
25,758,201 | 2014-09-10T05:52:00.000 | -1 | 0 | 0 | 0 | python,c,parsing | 25,758,326 | 6 | false | 0 | 0 | I use the re module with its match and search functions. search will find the text anywhere in the string, while match starts from the beginning of the string. | 1 | 0 | 0 | I have tried parsing files for #include lines using Python. I have tried to match the pattern using the sed command. Both ways I get garbage data. For example, if some comment contains /* #include "header.h" */, I get those lines as well. How can I avoid this? | How to get list of all header files included in C source file? | -0.033321 | 0 | 0 | 1,926 |
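Concretely, the garbage-data problem goes away if comments are stripped before matching, e.g. with two regexes (a sketch; it deliberately ignores the rarer case of "#include" inside a string literal):

```python
import re

COMMENT_RE = re.compile(r'/\*.*?\*/|//[^\n]*', re.DOTALL)  # /* */ and //
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*[<"]([^>"]+)[>"]', re.MULTILINE)

def included_headers(source):
    """List header names from #include lines, skipping commented-out ones."""
    return INCLUDE_RE.findall(COMMENT_RE.sub(' ', source))
```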
25,764,091 | 2014-09-10T11:22:00.000 | 12 | 1 | 1 | 0 | python,python-2.7,python-3.x | 25,764,175 | 2 | true | 0 | 0 | No, Python does not have header files nor similar. Neither does Java, despite your implication that it does.
Instead, we use "docstrings" in Python to make it easier to find and use our interfaces (with the built-in help() function). | 1 | 5 | 0 | Does Python require header files like C/C++?
What are the differences between including header files and importing packages? | Does python have header files like C/C++? | 1.2 | 0 | 0 | 30,600 |
25,764,889 | 2014-09-10T11:59:00.000 | 3 | 0 | 0 | 0 | python,django,django-authentication | 25,765,046 | 2 | false | 1 | 0 | You have to consider what exactly it means for a user to be "online". Since any user can close the browser window at any time without the server knowing about that action, you'd end up with lots of false "online" users.
You have two basic options:
Keep track of the user's last activity time. Every time the user loads a page you'd update the value of the timer. To get a list of all online users you'd select the ones with activity within the last X minutes. This is what some web forums do.
Open a websocket, a long-polling connection, or some heartbeat to the server. This is what Facebook chat does. You'd need more than just Django, since keeping a connection open requires a different kind of server-side resource. | 1 | 2 | 0 | I'm looking for a way to keep track of users that are online/offline, so if I present all users in a list I could have an icon or some kind of flag to show this. Is this built into Django's default auth system?
My first thought was to simply have a field in my profiles called last_logout in the models and update it with the date/time each time the user logs out.
With this info and the built-in last_login, I should be able to make some kind of function to determine if the user is logged in/online, right?
Or should I just have a boolean field called "online" that I change when the user logs in and out? | Get list of connected users with Django | 0.291313 | 0 | 0 | 3,712 |
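Building on the accepted answer's first option, the core check is a small pure-Python function over a last-activity timestamp (the field name is illustrative; in Django you would update it in middleware on every request and filter the queryset on it):

```python
from datetime import datetime, timedelta

ONLINE_WINDOW = timedelta(minutes=15)

def is_online(last_activity, now=None):
    """A user counts as online if we saw a request within the window;
    relying on last_logout alone misses users who simply close the tab."""
    now = now or datetime.utcnow()
    return last_activity is not None and now - last_activity <= ONLINE_WINDOW
```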
25,767,379 | 2014-09-10T13:57:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,range | 25,767,633 | 6 | false | 0 | 0 | Can I assume that the ranges stored don't leave gaps between them?
I would:
store the mapping as a dict (range_start -> value) just like you did
to get a value for key K:
do a binary search over the dict's keys to find the greatest key smaller or equal to K (O(logN))
return the value for that key (O(1)). | 1 | 4 | 0 | I should keep ranges (with different intervals) and according values like below:
0..5000: 1234
5001..10000: 1231
10001..20000: 3242
...
50001..100000: 3543
100001..200000: 2303
...
How should I store it? As dict like {'0': 1234, '5001': 1231, '10001': 3242, ...}?
Once stored, I will need to search for the corresponding value, but the search should match a range; for example, 6000 should return 1231. If I store it as a dict, how should I perform the search?
Upd. There are no gaps in the intervals and number of ranges is quite small (~50). | How to keep ranges with python? | 0.033321 | 0 | 0 | 864 |
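The binary search in the answer's step maps directly onto the standard-library bisect module once the range starts are kept in a sorted list parallel to their values (only the first three ranges from the question are shown; the real table would hold all ~50 starts):

```python
import bisect

starts = [0, 5001, 10001]      # sorted lower bounds of each range
values = [1234, 1231, 3242]    # value for the range beginning at each start

def lookup(key):
    """Value of the range containing key: greatest start <= key, O(log N)."""
    i = bisect.bisect_right(starts, key) - 1
    if i < 0:
        raise KeyError(key)
    return values[i]

print(lookup(6000))  # 1231, as in the question
```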
25,770,457 | 2014-09-10T16:21:00.000 | 0 | 0 | 0 | 0 | python,django,cookies | 72,094,059 | 2 | false | 1 | 0 | Sadly, there is no best way you can prevent this that I know of, but you can send the owner of an account an email and set up some type of 2FA. | 1 | 3 | 0 | I'm currently developing a site with Python + Django, and while building the login I started using the request.session[""] variables for the session the user is currently in. I recently realized that doing this generates a cookie with a "sessionid" value every time the user logs in, something like "c1cab412bc71e4xxxx1743344b3edbcc", and if I take that string and paste it into the cookie on another computer, on another network and everything, I can get access to the session without logging in.
So what I'm actually asking is: can anyone give me any tips on how to add some security to my system, or am I doing something wrong in setting session variables?
Can anyone give me any suggestions please? | Django session id security tips? | 0 | 0 | 0 | 357 |
25,770,740 | 2014-09-10T16:37:00.000 | 0 | 0 | 1 | 0 | python,enthought,canopy | 25,789,479 | 1 | false | 0 | 0 | There was an issue with the current working directory: it's not set by default to where the file is saved. cd-ing into the directory with the module in it fixed it. | 1 | 0 | 0 | I've created a module, and would like to access it through another Python script in Enthought Canopy. When I attempt to do the same thing using Python directly through the command line, this works just fine -- I just import myfile.py. Additionally, I know that my default Python distribution on this machine is Enthought Canopy. Does anyone know why I'm not able to access the module I've created from within a Python script in the Canopy editor? It just says there is 'No module named myfile', even though myfile.py is in the same directory. | Running a module that I've created out of Enthought Canopy | 0 | 0 | 0 | 235 |
25,772,149 | 2014-09-10T18:05:00.000 | 0 | 0 | 1 | 0 | python,pdb | 26,622,259 | 1 | false | 0 | 0 | The only reason I've encountered so far is that a list comprehension spanning multiple lines can be stepped out of using "unt" as it works now. With my proposal, "unt" would remain in the list comprehension. | 1 | 2 | 0 | I have been using pdb's "unt" command to step over list comprehensions with a single command. This works well unless the list comprehension happens to be at the end of the loop. Then the "unt" command steps over the entire loop.
It seems to me that this is a flaw in the definition of the "unt" command. Is there a reason why it wasn't defined as continuing execution until the current line changes, rather than waiting for it to increase? | python pdb unt at end of loop | 0 | 0 | 0 | 210 |
25,775,880 | 2014-09-10T22:08:00.000 | 0 | 1 | 1 | 0 | python,python-2.7,fipy | 56,643,433 | 3 | false | 0 | 0 | Since I had the same issue with Ubuntu I post this here.
If you have installed it using miniconda or anaconda, it will be:
/home/username/miniconda<version>/envs/<name of the environemnt you installed fipy in>
if you get the error that fipy module not found, you dont need to export the path but you just need to:
conda activate <nameOfEnvironment you installed fipy there> | 1 | 1 | 0 | I have recently installed the FiPy package onto my Macbook, with all the dependencies, through MacPorts. I have no troubles calling FiPy and NumPy as packages in Python.
Now that I have it working, I want to go through the examples. However, I cannot find the "base directory" or FiPy directory on my computer.
How can I find the base directory?
Do I even have the base directory if I have installed all this via MacPorts?
As a note, I am using Python 2.7.
Please, help! Thanks. | Where is the FiPy "base directory"? | 0 | 0 | 0 | 458 |
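One portable way to find where an installed package actually lives, independent of how MacPorts laid things out, is to import it and look at its __file__ attribute. A sketch (demonstrated with a stdlib package so it runs anywhere; substitute "fipy" on the machine in question):

```python
import importlib
import os

def package_dir(name):
    """Directory containing an importable package, e.g. package_dir('fipy')."""
    module = importlib.import_module(name)
    return os.path.dirname(os.path.abspath(module.__file__))

print(package_dir("email"))  # stdlib stand-in; use "fipy" on your machine
```

Whether the examples ship inside that directory depends on the install method; if they are missing there, they can be obtained from the FiPy source distribution.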
25,776,832 | 2014-09-10T23:44:00.000 | 1 | 0 | 0 | 1 | python,windows,wget,system-calls,mingw32 | 25,780,451 | 1 | true | 0 | 0 | There's no such thing as under "MinGW". You probably mean under MSYS, a Unix emulation environment for Windows. MSYS makes things look like Unix, but you're still running everything under Windows. In particular MSYS maps /bin to the drive and directory where you install MSYS. If you installed MSYS to C:\MSYS then your MSYS /bin directory is really C:\MSYS\bin.
When you add /bin to your MSYS PATH environment variable, MSYS searches the directory C:\MSYS\bin. When you add /bin to the Windows PATH environment using the command SETX, Windows will look in the \bin directory of the current drive.
Presumably your version of Python is the standard Windows port of Python. Since it's a normal Windows application, it doesn't interpret the PATH environment variable the way you're expecting it to. With /bin in the path, it will search the \bin directory of the current drive. Since wget is in C:\MSYS\bin, not \bin of the current drive, you get an error when trying to run it from Python.
Note that if you run a Windows command from the MSYS shell, MSYS will automatically convert its PATH to a Windows compatible format, changing MSYS pathnames into Windows pathnames. This means you should be able to get your Python script to work by running Python from the MSYS shell. | 1 | 0 | 0 | I am trying to figure out a way to call wget from my python script on a windows machine. I have wget installed under /bin on the machine. Making a call using the subprocess or os modules seems to raise errors no matter what I try. I'm assuming this is related to the fact that I need to route my python system call through minGW so that wget is recognized.
Does anyone know how to handle this?
Thanks | System Call in Python via MINGW32 on Windows | 1.2 | 0 | 0 | 1,827 |
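Following the answer's point that the Windows build of Python does not see the MSYS /bin mapping, one hedged workaround is to invoke wget by an explicit full Windows path from subprocess. The install path below is an assumption; adjust it to wherever MSYS actually lives:

```python
import subprocess

# Assumed MSYS install location -- change this to match your setup.
WGET = r"C:\MSYS\bin\wget.exe"

def wget_command(url):
    # Build the argument list with the full path so the standard Windows
    # Python does not depend on MSYS's /bin -> C:\MSYS\bin mapping.
    return [WGET, url]

# Actual invocation (left commented out so the sketch runs anywhere):
# subprocess.call(wget_command("http://example.com/file.txt"))
```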
25,778,029 | 2014-09-11T02:20:00.000 | 0 | 1 | 0 | 0 | python,smtp | 25,808,100 | 1 | false | 0 | 0 | It turns out that I was sending to empty recipients. | 1 | 0 | 0 | I'm using Python to send mass emails. It seems that I am sending too many, too fast, and I am getting SMTPRecipientsRefused(senderrs) errors.
I used
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
Any ideas? Thanks! | Getting SMTPRecipientsRefused(senderrs) because of sending too many? | 0 | 0 | 1 | 361 |
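A small sketch of the fix implied by the answer: filter out empty recipient addresses before calling sendmail. The addresses are made up, and the actual server call is left commented out so nothing goes over the network:

```python
def clean_recipients(addresses):
    # Drop empty or whitespace-only addresses; sending to an empty
    # recipient is one way to trigger SMTPRecipientsRefused.
    return [a.strip() for a in addresses if a and a.strip()]

recipients = ["alice@example.com", "", "   ", "bob@example.com"]
to_send = clean_recipients(recipients)
print(to_send)  # ['alice@example.com', 'bob@example.com']

# if to_send:
#     server.sendmail("me@example.com", to_send, message)
```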
25,778,586 | 2014-09-11T03:30:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore | 25,779,584 | 1 | false | 1 | 0 | You should add to each Datastore entity an indexed property to query on.
For example, you could create a "hash" property that will contain the date (in ms since epoch) modulo 15 minutes (in ms).
Then you just have to query with a filter saying hash=0, or rather a random value between 0 and 15 min (in ms). | 1 | 0 | 0 | My users can supply a start and end date and my server will return a list of points between those two dates.
However, there are too many points between each hour and I am interested to pick only one random point per every 15 minutes.
Is there an easy way to do this in Appengine? | Appengine: Query only a subset of the data? | 0.197375 | 0 | 0 | 37
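The answer's 15-minute "hash" property can be sketched like this. Names are illustrative and the Datastore query itself is omitted; the idea is just to store the offset of each point inside its 15-minute window and filter on one (random) offset at query time:

```python
import random

FIFTEEN_MIN_MS = 15 * 60 * 1000

def bucket_offset(timestamp_ms):
    # Offset of a point inside its 15-minute window, in milliseconds;
    # stored as an indexed property alongside each entity.
    return timestamp_ms % FIFTEEN_MIN_MS

# At query time, filter on a random offset (or, more practically, a small
# range around it, since an exact millisecond value may match nothing).
offset = random.randrange(FIFTEEN_MIN_MS)

print(bucket_offset(FIFTEEN_MIN_MS * 3 + 1234))  # 1234
```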
25,780,445 | 2014-09-11T06:22:00.000 | 1 | 1 | 0 | 0 | python,flask,raspberry-pi,qr-code | 25,781,952 | 1 | false | 1 | 0 | First of all, QR codes aren't magic. All they contain is a string of text. That text could say "Hello", or be a phone number, email address, or URL.
It is up to the QR scanner to decide what to do with the text it encounters.
For example, you could build a QR scanner which tells your Pi to delete data when it scans the text "123abc".
Or, you could have a URL like http://192.168.0.34/delete?data=abc123 where the IP address is the internal network address of your Pi. | 1 | 0 | 0 | is it possible to let a web server, for example my Raspberry Pi, start a Script when a specific QR-Code gets scanned from my Mobile Device connected to my Home-Network?
e.g.: I want the Pi to delete specific data from a Database if the QR-Code gets scanned from a Mobile Device inside my Network. | Starting Script on Server when QR-Code gets scanned | 0.197375 | 0 | 0 | 602
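A minimal illustration of the answer's point that a QR code is just text and the server decides what that text means. The action names and payload format here are invented; in practice the handler would do the real database deletion:

```python
def delete_record(payload):
    # Placeholder for the real database deletion on the Pi.
    return "deleted %s" % payload

# The scanner only yields a string; this table decides what it means.
ACTIONS = {"delete": delete_record}

def handle_scan(text):
    # e.g. text decoded from the QR code: "delete:abc123"
    action, _, payload = text.partition(":")
    handler = ACTIONS.get(action)
    return handler(payload) if handler else "ignored"

print(handle_scan("delete:abc123"))  # deleted abc123
print(handle_scan("Hello"))          # ignored
```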
25,783,481 | 2014-09-11T09:12:00.000 | 7 | 0 | 0 | 0 | android,python,ios,django | 25,783,845 | 2 | true | 1 | 0 | Django's strength is in its ORM, huge documentation, and the thousands of reusable applications. The problem with those reusable apps is that the majority is written following Django's MVC design, and as you need a web service, and not a website or web application, most of those apps will be almost useless for you.
On the other hand, there is Django-REST-Framework, extending Django itself, which is pretty good, and its declarative API feels as if it were part of Django itself. For simple cases just a couple of lines of code can produce a complete CRUD API following REST conventions, generating beautiful URLs, with out-of-the-box support for multiple authentication mechanisms, etc., but it could be overkill to pick Django just because of that, especially if you do not wish to use its ORM.
Flask on the other hand is pretty lightweight, and it's not an MVC-only framework, so in combination with Flask-RESTful, I think it would be an ideal tool for writing REST services.
So a conclusion would be that Django provides the best out-of-the-box experience, but Flask's simplicity and size are too compelling to ignore. | 2 | 1 | 0 | I know this is a bit off topic, but I really needed some help regarding this.
I am new to Python. I'm trying to build my next project (a dictionary web app which will have both iOS and android app as well) for myself in Python. I've done some research and listed out some promising frameworks.
django
pylons (pyramid + repoze.bfg)
tornado
CherryPy
pyjamas
flask
web.py
etc
But while django is great, it was originally built for newspaper-like site projects. I'm stuck making a choice for a dictionary-like web application which will have to provide a RESTful web service API for handling mobile requests.
So can anyone please help in pointing out which framework is the best choice for this type of web app? I think I should go with django. Or should I go with native python coding? Any suggestions will be great. | Python framework choice | 1.2 | 0 | 0 | 252
25,783,481 | 2014-09-11T09:12:00.000 | 2 | 0 | 0 | 0 | android,python,ios,django | 25,784,170 | 2 | false | 1 | 0 | Go with Django, ignore its entire templating system(used to generate web pages) and use Django-Tastypie for REST service. Easy to learn and set-up is instant. | 2 | 1 | 0 | I know this is a bit off topic, but I really needed some help regarding this.
I am new to Python. I'm trying to build my next project (a dictionary web app which will have both iOS and android app as well) for myself in Python. I've done some research and listed out some promising frameworks.
django
pylons (pyramid + repoze.bfg)
tornado
CherryPy
pyjamas
flask
web.py
etc
But while django is great, it was originally built for newspaper-like site projects. I'm stuck making a choice for a dictionary-like web application which will have to provide a RESTful web service API for handling mobile requests.
So can anyone please help in pointing out which framework is the best choice for this type of web app? I think I should go with django. Or should I go with native python coding? Any suggestions will be great. | Python framework choice | 0.197375 | 0 | 0 | 252
25,785,243 | 2014-09-11T10:36:00.000 | 129 | 0 | 1 | 0 | python,python-3.x | 25,787,875 | 1 | true | 0 | 0 | There are two distinct types of 'time', in this context: absolute time and relative time.
Absolute time is the 'real-world time', which is returned by time.time() and which we are all used to deal with. It is usually measured from a fixed point in time in the past (e.g. the UNIX epoch of 00:00:00 UTC on 01/01/1970) at a resolution of at least 1 second. Modern systems usually provide milli- or micro-second resolution. It is maintained by the dedicated hardware on most computers, the RTC (real-time clock) circuit is normally battery powered so the system keeps track of real time between power ups. This 'real-world time' is also subject to modifications based on your location (time-zones) and season (daylight savings) or expressed as an offset from UTC (also known as GMT or Zulu time).
Secondly, there is relative time, which is returned by time.perf_counter and time.process_time. This type of time has no defined relationship to real-world time, in the sense that the relationship is system and implementation specific. It can be used only to measure time intervals, i.e. a unit-less value which is proportional to the time elapsed between two instants. This is mainly used to evaluate relative performance (e.g. whether this version of code runs faster than that version of code).
On modern systems, it is measured using a CPU counter which is monotonically increased at a frequency related to CPU's hardware clock. The counter resolution is highly dependent on the system's hardware, the value cannot be reliably related to real-world time or even compared between systems in most cases. Furthermore, the counter value is reset every time the CPU is powered up or reset.
time.perf_counter returns the absolute value of the counter. time.process_time is a value which is derived from the CPU counter but updated only when a given process is running on the CPU and can be broken down into 'user time', which is the time when the process itself is running on the CPU, and 'system time', which is the time when the operating system kernel is running on the CPU on behalf on the process. | 1 | 99 | 0 | I have some questions about the new functions time.perf_counter() and time.process_time().
For the former, from the documentation:
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
Is this 'highest resolution' the same on all systems? Or does it depend slightly on whether, for example, we use Linux or Windows?
The question comes from the fact that, reading the documentation of time.time(), it says that 'not all systems provide time with a better precision than 1 second', so how can they provide a better and higher resolution now?
About the latter, time.process_time():
Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep. It is process-wide by definition. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid.
I don't understand, what are those 'system time' and 'user CPU time'? What's the difference? | Understanding time.perf_counter() and time.process_time() | 1.2 | 0 | 0 | 76,339 |
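The distinction described in the answer can be observed directly (Python 3.3+): sleeping advances perf_counter, which measures wall-clock intervals, but leaves process_time almost untouched, since a sleeping process uses no CPU:

```python
import time

t0_wall = time.perf_counter()
t0_cpu = time.process_time()

time.sleep(0.2)  # the process is idle here

wall = time.perf_counter() - t0_wall
cpu = time.process_time() - t0_cpu

# wall includes the sleep; cpu only counts time the CPU actually spent
# running this process, so it stays near zero.
print("wall: %.3f s" % wall)
print("cpu:  %.3f s" % cpu)
```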
25,791,196 | 2014-09-11T15:15:00.000 | 1 | 0 | 0 | 1 | python,macos,root,pyinstaller | 25,791,380 | 2 | false | 0 | 0 | Running the installer as root will have no effect when you later start the application itself as a normal user.
Try sudo python /path/to/script.py instead.
If that works, then put this into a shell script and run that to start the app as root from now on (and the people who know MacOS can probably tell you how you can create a nice icon for the script).
WARNING Doing this makes your system vulnerable to attacks. If you do this on your own Mac, that's fine. If you're developing a product that you're selling to other people, then you need to revisit your design since it's severely broken. | 1 | 0 | 0 | I'm getting ready to deploy an app on OS X. This is the first time I've written an application on this platform which requires root permissions to run properly, so I need that functionality integrated for every startup attempt.
The application itself is written in Python 2.7, and then compiled to binary using PyInstaller. So far, I've tried:
Running PyInstaller using sudo pyinstaller -w --icon=/path/to/icon /path/to/script.py
Invoking the PyInstaller command using sudo su
I don't know what else to try at this point. Is it something that could be achieved using symlinks? | Forcing a GUI application to run as root | 0.099668 | 0 | 0 | 969 |
25,792,285 | 2014-09-11T16:09:00.000 | 0 | 1 | 0 | 1 | python,bash,shell | 25,974,955 | 2 | true | 0 | 0 | I combined two things:
ran automated tests with the old and new version of Python and compared results
used snakefood to track the dependencies and ran the parent scripts
Thanks for the os.walk and os.getppid suggestion, however, I didn't want to write/use any additional code. | 1 | 1 | 0 | I need to test whether several .py scripts (all part of a much bigger program) work after updating python. The only thing I have is their path. Is there any intelligent way how to find out from which other scripts are these called? Brute-forece grepping wasn't as good aas I expected. | How to find out from where is a Python script called? | 1.2 | 0 | 0 | 497 |
25,793,355 | 2014-09-11T17:12:00.000 | 0 | 0 | 0 | 0 | python,tkinter,resize | 47,625,796 | 2 | false | 0 | 1 | Rather than a Label, use a Text inside the Frame and add a vertical scrollbar to the Text. Pack both the Text and the Scrollbar in the Frame, using examples online, and connect them up. Once you cancel propagation for the Frame it won't resize. You can then set the Frame to whatever size you want, even set the Text to a specific number of lines and columns of text before you cancel propagation for the Frame, and then you can display as much text as you want at the bottom, allowing the user to scroll through if it is too big. | 1 | 0 | 0 | This is pretty simple.
I have a label along the bottom of my app. As a user scrolls through a treeview, the content of that label changes. Sometimes, the content is multiple lines long.
Previously, the label would grow/shrink to accommodate content. I didn't like this, so I wrapped the label in a frame, gave said frame a fixed height, and set grid_propagate(0). It looks great, but now my text has disappeared.
How can I give the label a fixed size and retain the ability to update its text? | Tkinter, enclosing_frame.grid_propogate(0) to prevent resizing also prevents updating label text | 0 | 0 | 0 | 296 |
25,793,784 | 2014-09-11T17:38:00.000 | 1 | 0 | 1 | 0 | python,emacs | 25,802,726 | 2 | false | 0 | 0 | See also in menu Python/Checks
It offers commands to run known tools, or bundles of them, like pychecker, pylint, pep8, flake8, and pyflakes.
Make sure these backends are installed; run "pip install pylint" for example. | 2 | 0 | 1 | Are there commands in python-mode (under Emacs) that can intelligently and automatically detect and correct incorrect indentations?
For example, detect and correct three spaces to four, etc. Thanks. | Can python-mode correct incorrect indentations? | 0.099668 | 0 | 0 | 60
25,793,784 | 2014-09-11T17:38:00.000 | 3 | 0 | 1 | 0 | python,emacs | 25,800,835 | 2 | false | 0 | 0 | In python-mode.el there's a py-indent-region. I just tried it in a simple situation -- and it worked. | 2 | 0 | 0 | Are there commands in python-mode (under Emacs) that can intelligently and automatically detect and correct incorrect indentations?
For example, detect and correct three spaces to four, etc. Thanks. | Can python-mode correct incorrect indentations? | 0.291313 | 0 | 0 | 60
25,793,798 | 2014-09-11T17:38:00.000 | 0 | 0 | 1 | 0 | excel,python-2.7,spss | 25,795,152 | 3 | false | 0 | 0 | Could you perform an arithmetic operation on the field and catch the exception to skip/flag it | 1 | 2 | 0 | I have a number of excel files from other sources, and I need to check that they haven't accidentally put an alpha character in a numeric field. Not all variables are numeric only, and I can't simply purge alpha characters.
How can I flag those cases so that I can bring it to the attention of those sending the data? | Searching for alpha characters in a column in excel | 0 | 0 | 0 | 119
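The try/except idea from the answer can be sketched in Python. Reading the cells out of the workbook (e.g. with a library such as xlrd or openpyxl) is not shown; the sketch just assumes an iterable of (row, value) pairs:

```python
def flag_non_numeric(cells):
    # cells: iterable of (row_number, cell_value) pairs.
    flagged = []
    for row, value in cells:
        try:
            float(value)  # the arithmetic check: conversion fails on "12a"
        except (TypeError, ValueError):
            flagged.append((row, value))
    return flagged

print(flag_non_numeric([(1, "3.14"), (2, "12a"), (3, "7")]))  # [(2, '12a')]
```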
25,795,944 | 2014-09-11T19:59:00.000 | 4 | 0 | 1 | 1 | python,c,openmp,ctypes,intel-mkl | 25,822,184 | 1 | true | 0 | 0 | Having -fopenmp while compiling enables OpenMP support and introduces in the resultant object file references to functions from the GNU OpenMP run-time support library libgomp. You should then link your shared object (a.k.a. shared library) against libgomp in order to tell the run-time linker to also load libgomp (if not already loaded via some other dependency) whenever your library is used so that it could resolve all symbols.
Linking against libgomp can be done in two ways:
If you use GCC to also link the object files and produce the shared object, just give it the -fopenmp flag.
If you use the system linker (usually that's ld), then give it the -lgomp option.
A word of warning for the second case: if you are using GCC that is not the default system-wide one, e.g. you have multiple GCC versions installed or use a version that comes from a separate package or have built one yourself, you should provide the correct path to libgomp.so that matches the version of GCC. | 1 | 2 | 0 | I have a library that I compiled with gcc using -fopenmp and linking to libmkl_gnu_thread.a.
When I try to load this library using ctypes I get the error message
undefined symbol: GOMP_critical_end
Compiling this without openmp and linking to libmkl_sequential.a instead of gnu_thread, the library works fine, but I'd rather not have to build different versions in order to support Python.
How do I fix this error? Do I need to build python from source with openmp support? I'd like to avoid this since users don't want to have to build their own python to use this software.
I'm using python2.7.6. | Python ctypes error GOMP_critical_end when loading library | 1.2 | 0 | 0 | 1,110 |
25,797,443 | 2014-09-11T21:44:00.000 | 2 | 0 | 0 | 0 | python,emacs | 25,802,561 | 1 | false | 0 | 0 | Emacs command pdb is defined in core. It offers the last file in its history for debugging.
Seems you have to replace test.py by the current buffer-file-name. | 1 | 3 | 0 | Under Emacs, I opened a .py file. I want to debug it using pdb.
I hit M-x pdb, then the bottom bar of Emacs asks me:
Run /usr/lib/python2.7/pdb.py (like this): /usr/lib/python2.7/pdb.py test.py
I hit Enter. Then it creates a new buffer showing
Current directory is ~/python_programs/
It doesn't show the prompt of pdb. When I enter pdb commands such as n, they are just entered that new buffer, as if I were editing the buffer. It seems that no pdb is running.
But if I invoke pdb again for my .py file in the same way as above, the bottom bar of Emacs will say:
This program is already being debugged
I am baffled. Do I miss something?
Thanks.
p.s. If it matters, I am using python-mode.el, but I guess the problem has nothing to do with it. | Why can't enter pdb debugger, by M-X pdb test.py? | 0.379949 | 0 | 0 | 220 |
25,797,494 | 2014-09-11T21:48:00.000 | 0 | 0 | 0 | 0 | python,sockets,real-time,long-polling,webhooks | 25,800,249 | 2 | false | 0 | 0 | There is no way to send data to the client without having some kind of connection, e.g. either websockets or (long) polling done by the client. While it would be possible in theory to open a listener socket on the client and let the web server connect to it and send the data to this socket, this will not work in reality. The main reason for this is that the client is often inside an internal network not reachable from outside, i.e. a typical home setup with multiple computers behind a single IP or a corporate setup with a firewall in between. In this case it is only possible to establish a connection from inside to outside, but not the other way. | 1 | 0 | 0 | I am trying to create a python application that can continuously receive data from a webserver. This python application will be running on multiple personal computers, and whenever new data is available it will receive it. I realize that I can do this either by long polling or web sockets, but the problem is that sometimes data won't be transferred for days and in that case long polling or websockets seem to be inefficient. I won't be needing that long of a connection but whenever data is available I need to be able to push it to the application. I have been reading about webhooks and it seems that if I can have a url to post that data to, I won't need to poll. But I am confused as to how each client would have a callback url because in that case a client would have to act as a server. Are there any libraries that help in getting this done? Any kind of resources that you can point me to would be really helpful! | Is it possible to implement webhooks on regular clients? | 0 | 0 | 1 | 191
25,798,916 | 2014-09-12T00:26:00.000 | 1 | 0 | 1 | 1 | python,windows,python-2.7,deployment,exe | 25,810,294 | 1 | true | 0 | 0 | The executable creation packages should be able to grab 3rd party packages if they're installed. Sometimes you have to specify what to include if the library abuses Python's importing system or it's not a "pure Python" package. For example, I would sometimes have to specifically include lxml to get py2exe to pick it up properly.
The py2exe project for Python 2 hasn't been updated in quite a long time, so I would certainly recommend one of the alternatives: PyInstaller, cx_freeze or bb_freeze.
I have only seen issues with MSVCP90.dll when using non pure Python packages, such as wxPython. Normally you can add that in your setup.py to include it. If that doesn't work, then you could also add it using an installer utility like NSIS. Or you may just have to state in your README that your app depends on Microsoft's C++ redistributable and include a link to it. | 1 | 0 | 0 | I want to deploy an executable (.exe) from my python2.7 project with everything included. I have seen pyinstaller and py2exe but the problem is that my project uses a lot of third-party packages that are not supported by default. What is the best choice for such cases? Is there any other distribution packager that could be used?
Thank you | Python deployment with third-party libraries | 1.2 | 0 | 0 | 452 |
25,800,481 | 2014-09-12T03:56:00.000 | 0 | 0 | 1 | 1 | python,visual-studio,cpython | 25,815,355 | 1 | false | 0 | 0 | OK, I figured it out
I was using VS 2013 while Python's build system was designed for VS 2010.
I ended up retargeting everything for 2013 (including a small modification to the tix makefile) and it compiled with all non-static symbols (AST and all) as expected.
Python.org's official pre-built Windows libraries still seem to omit the AST symbols. I don't mind building Python from source myself, but I think the official builds should package the whole shebang. | 1 | 0 | 0 | I'm writing a C application that makes use of Python's AST API to transform Python code expressions before emitting bytecode. I've been a longtime POSIX developer (currently OS X), but I wish learn how to port my projects to Windows as well.
I'm using the static libraries (.lib) generated by build.bat in Python's PCBuild directory. The trouble with these libraries is they somehow skip over the symbols in Python/Python-ast.c as well as Python/asdl.c. I need these APIs for their AST constructors, but I'm not sure how to get Visual Studio to export them.
Do I need to add __declspec(dllexport) for static libraries?
EDIT: I do not have this problem with static libraries generated on POSIX platforms | Exporting cpython AST symbols on Windows | 0 | 0 | 0 | 33 |
25,805,200 | 2014-09-12T09:37:00.000 | 0 | 0 | 1 | 1 | python,pip | 25,808,077 | 3 | false | 0 | 0 | I would suggest creating virtual environment if it is possible for you.
You would just use sudo apt-get install python-virtualenv to install virtualenv, then enter your folder where you store python projects and type into terminal virtualenv venv. After that, you can activate it like this source venv/bin/activate.
What it does is it creates almost full copy of python (some libraries are just linked to save space) and everything you do after activating only affects that copy, not global environment. Therefore you can install any set of libraries using pip, update them etc. and you won't change anything outside of virtual environment. But don't forget to activate it first before you do anything. | 1 | 3 | 0 | I'm using a bunch of python packages for my research that I install in my home directory using the --user option of pip. There are also some packages that were installed by the package manager of my distribution for other things. I would like to have a pip command that only upgrade the packages I installed myself with the --user option.
I tried the recommended version pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs pip install -U but this seems to only work using virtualenvs. pip freeze --local is showing packages that are installed for my user and systemwide.
Is there a way to upgrade only the packages installed locally for my user? | update user installed packages with pip | 0 | 0 | 0 | 2,767 |
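The shell pipeline in the question can also be done in pure Python by parsing freeze-style output into a list of names to upgrade; newer pip releases additionally accept pip freeze --user to restrict output to user-site packages. The sample output below is made up:

```python
# Made-up sample of `pip freeze`-style output.
SAMPLE_FREEZE = """\
requests==2.2.1
numpy==1.8.0
-e git+https://example.com/repo.git#egg=mypkg
"""

def package_names(freeze_text):
    # Keep plain `name==version` lines; skip blanks and editable (-e) entries,
    # mirroring the grep/cut steps of the shell pipeline.
    names = []
    for line in freeze_text.splitlines():
        line = line.strip()
        if not line or line.startswith("-e"):
            continue
        names.append(line.split("==")[0])
    return names

print(package_names(SAMPLE_FREEZE))  # ['requests', 'numpy']
```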
25,807,271 | 2014-09-12T11:28:00.000 | 0 | 0 | 1 | 0 | python,mongodb,pymongo,upsert | 25,807,361 | 2 | false | 0 | 0 | You can index on one or more fields(not _id) of the document/xml structure. Then make use of find operator to check if a document containing that indexed_field:value is present in the collection. If it returns nothing then you can insert new documents into your collection. This will ensure only new docs are inserted when you re-run the script. | 1 | 0 | 0 | i'm writing a script that puts a large number of xml files into mongodb, thus when i execute the script multiple times the same object is added many times to the same collection.
I looked for a way to stop this behavior by checking the existence of the object before adding it, but can't find a way.
help! | add if no duplicate in collection mongodb python | 0 | 1 | 1 | 491 |
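An in-memory sketch of the answer's "check the indexed field, then insert" idea. With real MongoDB you would put a unique index on the field and use pymongo (e.g. an upsert) rather than a dict, but the control flow is the same. The field names are invented:

```python
# Stand-in for the MongoDB collection, keyed on the indexed field.
collection = {}

def insert_if_new(doc, key_field="file_id"):
    key = doc[key_field]
    if key in collection:
        return False  # duplicate from a previous run: skip it
    collection[key] = doc
    return True

print(insert_if_new({"file_id": "a.xml", "size": 10}))  # True
print(insert_if_new({"file_id": "a.xml", "size": 10}))  # False (re-run)
```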
25,814,134 | 2014-09-12T17:59:00.000 | 0 | 0 | 1 | 0 | python,controls,communication | 25,814,194 | 2 | false | 0 | 0 | You have a lot of choices for exchanging messages between programs or components:
You can write output files that other programs can read and act on. You could have the consumer watch a directory for a file and react when it arrives.
You could make them distributed components that exchanged messages via sockets or some higher level protocol like HTTP. The communication could be synchronous or asynchronous.
You could connect them as producers writing to message queues or topics and consumers listening to the queue or topic for events. | 1 | 0 | 0 | I have a (probably) simple question that the internet seems to be of no help with. I would like to make several python programs interact within another python program and have no idea how to get them to put input into each other. My eventual idea is to (as a proof of concept) have one program act as the environment and the others act as creatures in that environment. let me clarify: I am sure you have seen those programs that simulate natural environments with the creatures in them interacting. I would like to do the same kind of thing just on a smaller scale (text in the place of fancy 3d graphics if at all). The ultimate goal of this is not to have a complex ecosystem but to see how far I can push the communication between the programs (and my computer's power along the way).
P.S. I would like to continue to run it from the IDLE or from the command line. | Python programs communicating | 0 | 0 | 0 | 60 |
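Of the options above, the producer/consumer one can be illustrated in-process with the standard-library queue module (the Python 3 name; it is Queue in Python 2). A real multi-program setup would replace the shared queue with sockets or a message broker, but the shape is the same:

```python
import queue

q = queue.Queue()

def environment(events):
    for e in events:
        q.put(e)  # producer: the environment publishes events

def creature():
    seen = []
    while not q.empty():
        seen.append(q.get())  # consumer: a creature reacts to them
    return seen

environment(["food at (2, 3)", "rain"])
print(creature())  # ['food at (2, 3)', 'rain']
```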