Q_Id (int64, 337–49.3M) | CreationDate (string, length 23) | Users Score (int64, -42–1.15k) | Other (int64, 0–1) | Python Basics and Environment (int64, 0–1) | System Administration and DevOps (int64, 0–1) | Tags (string, length 6–105) | A_Id (int64, 518–72.5M) | AnswerCount (int64, 1–64) | is_accepted (bool, 2 classes) | Web Development (int64, 0–1) | GUI and Desktop Applications (int64, 0–1) | Answer (string, length 6–11.6k) | Available Count (int64, 1–31) | Q_Score (int64, 0–6.79k) | Data Science and Machine Learning (int64, 0–1) | Question (string, length 15–29k) | Title (string, length 11–150) | Score (float64, -1–1.2) | Database and SQL (int64, 0–1) | Networking and APIs (int64, 0–1) | ViewCount (int64, 8–6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
27,690,388 | 2014-12-29T14:06:00.000 | 0 | 0 | 1 | 0 | python,powershell | 27,753,819 | 1 | true | 0 | 0 | According to Lukas Graf:
Replace source with . (a single dot) and the relative path after it with a full, absolute path | 1 | 1 | 0 | I want to activate this virtual environment:
(G:/virt_env/virt1)
I'm just following a virtualenv tutorial. I have created a virtual environment (see above); the next step is activating it, but that tutorial was written for Unix. So how do I activate this virtual environment using PowerShell 2? This is assuming basic knowledge of PowerShell.
Edit: Question answered | How do I activate a virtual environment in Powershell 2? | 1.2 | 0 | 0 | 522 |
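A minimal sketch of what the answer describes, assuming the environment from the question lives at G:\virt_env\virt1 and was created with a virtualenv version that ships an activate.ps1 script:

```powershell
# Dot-source the activation script with its full, absolute path
# (the "." replaces the Unix "source" command).
. G:\virt_env\virt1\Scripts\activate.ps1
```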
27,695,564 | 2014-12-29T20:29:00.000 | 2 | 0 | 0 | 0 | python,django,middleware | 27,695,615 | 2 | true | 1 | 0 | I'm not sure why you say that session variables are not the best solution. On the contrary, they are absolutely the right solution for doing this.
In the first view, you can simply do request.session['first_data'] = my_data, and in the second, my_data = request.session.pop('first_data'). | 1 | 1 | 0 | I have a standard django application. In a view (let's name it First) I call an HttpResponseRedirect (to the view called Second) however there is some data I'd like to transfer to the Second view. What options do I have to achieve this?
One would be to set a GET parameter, however this is not a nice solution.
I also could set a session variable, but this is also not the best solution.
Do I have any other possibility? For example could I use some context processor or something similar?
My question might not contain all the information, but I hope someone can give me a good tip. | Django, move variables | 1.2 | 0 | 0 | 46
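A minimal sketch of the session approach from the accepted answer; the view and template names are hypothetical:

```python
from django.shortcuts import redirect, render

def first(request):
    # stash the data in the session before redirecting
    request.session['first_data'] = {'greeting': 'hello'}
    return redirect('second')

def second(request):
    # pop it back out in the target view (the default avoids a KeyError)
    my_data = request.session.pop('first_data', None)
    return render(request, 'second.html', {'data': my_data})
```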
27,710,366 | 2014-12-30T18:09:00.000 | 2 | 0 | 0 | 0 | python,django,python-2.7 | 27,710,578 | 2 | false | 1 | 0 | The problem with frequent use of absolute_import is usually caused by the ambiguity within a package. If you are developing several subpackages and need to constantly use absolute_import to use a top-level package, just rename the problem subpackage. It is anyway a good idea. | 1 | 2 | 0 | I like to use absolute_import function in Python 2.7. Because in Python 2.7 there is no absolute_import
So I have to import it like this.
from __future__ import absolute_import
In my Django project I have a lot of files, like models.py, views.py and so on. And on top of each file I have to put
from __future__ import absolute_import
to be able to use this function.
The question is the following:
Is there a possibility to import absolute_import only once in the project and use it everywhere? | Import absolute import only once in Django project | 0.197375 | 0 | 0 | 904
27,714,535 | 2014-12-31T00:22:00.000 | 3 | 0 | 0 | 0 | python,opencv,pixel,integral | 27,717,883 | 2 | false | 0 | 0 | sumElems function in OpenCV will help you to find out the sum of the pixels of the whole of the image in python. If you want to find only the sum of a particular portion of an image, you will have to select the ROI of the image on the sum is to be calculated.
As a side note, if you had found out the integral image, the very last pixel represents the sum of all the pixels of the image. | 2 | 2 | 1 | I have an image and want to find the sum of a part of it and then compared to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestion? | how to do the sum of pixels with Python and OpenCV | 0.291313 | 0 | 0 | 17,785 |
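A small sketch of the integral-image remark in this answer, assuming a grayscale image loaded with OpenCV; the file name is hypothetical:

```python
import cv2
import numpy as np

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
integral = cv2.integral(gray)      # result has shape (h + 1, w + 1)
total = integral[-1, -1]           # bottom-right entry = sum of all pixels
print(total, np.sum(gray))         # the two values should agree
```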
27,714,535 | 2014-12-31T00:22:00.000 | 5 | 0 | 0 | 0 | python,opencv,pixel,integral | 27,738,842 | 2 | false | 0 | 0 | np.sum(img[y1:y2, x1:x2, c1:c2]) Where c1 and c2 are the channels. | 2 | 2 | 1 | I have an image and want to find the sum of a part of it and then compared to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestion? | how to do the sum of pixels with Python and OpenCV | 0.462117 | 0 | 0 | 17,785 |
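A short sketch of this answer applied to the question's use case (sum a rectangle, compare to a threshold); the file name, rectangle, and threshold are hypothetical:

```python
import cv2
import numpy as np

img = cv2.imread("image.png")
x1, y1, x2, y2 = 10, 20, 110, 120          # rectangle drawn on the image
roi_sum = np.sum(img[y1:y2, x1:x2])        # sum over all channels in the ROI
threshold = 1000000
print(roi_sum > threshold)
```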
27,716,752 | 2014-12-31T05:50:00.000 | 1 | 0 | 1 | 0 | python,string,list,comparison,overlapping | 27,716,829 | 3 | false | 0 | 0 | all possible list combinations to string, and avoiding overlaping
elements
Is a combination one or more complete items in its exact, current order in the list that match a pattern or subpattern of the string? I believe one of the requirements is to not rearrange the items in the list (ab doesn't get substituted for ba). I believe one of the requirements is to not rearrange the characters in the string. If the subpattern appears twice, then you want the combinations to reflect two individual copies of the subpattern by themselves as well as a list of with both items of the subpattern with other subpatterns that match too. You want multiple permutations of the matches. | 1 | 0 | 0 | I want to know how to compare a string to a list.
For example
I have a string 'abcdab' and a list ['ab','bcd','da']. Is there any way to compare all possible list combinations to the string and avoid overlapping elements, so that the output will be a list of tuples like
[('ab','da'),('bcd'),('bcd','ab'),('ab','ab'),('ab'),('da')].
The output should avoid combinations such as ('bcd', 'da') as the character 'd' is repeated in tuple while it appears only once in the string.
As pointed out in the answer. The characters in string and list elements, must not be rearranged.
One way I tried was to split the string into all possible combinations of elements and compare, which gives 2^(n-1) combinations, n being the number of characters. It was very time consuming.
I am new to python programming.
Thanks in advance. | Python: comparing list to a string | 0.066568 | 0 | 0 | 567 |
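A rough sketch under one reading of the question: enumerate every set of non-overlapping occurrences of the list items inside the string. It reproduces the example output above, but it is exponential and only meant to illustrate the idea:

```python
def occurrences(s, part):
    # yield every (start, end, part) occurrence of part in s
    i = s.find(part)
    while i != -1:
        yield (i, i + len(part), part)
        i = s.find(part, i + 1)

def non_overlapping_combos(s, parts):
    spans = [occ for p in parts for occ in occurrences(s, p)]
    results = set()

    def extend(chosen):
        if chosen:
            # record the chosen parts ordered by their position in the string
            results.add(tuple(p for _, _, p in sorted(chosen)))
        for start, end, part in spans:
            if all(end <= s2 or start >= e2 for s2, e2, _ in chosen):
                extend(chosen + [(start, end, part)])

    extend([])
    return results

print(non_overlapping_combos('abcdab', ['ab', 'bcd', 'da']))
# {('ab',), ('bcd',), ('da',), ('ab', 'ab'), ('ab', 'da'), ('bcd', 'ab')}
```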
27,717,022 | 2014-12-31T06:18:00.000 | 1 | 0 | 1 | 0 | python,string,split | 27,717,145 | 4 | false | 0 | 0 | I am creating a similar program. I created a word list from sentence using .split(). And compared it to a dictionary. Then for unknown words. I used binary map and created all possible combinations of chunks. Then from those combinations I seperated unique chunks. And compared it to dictionary. Now I have all possible combination of unknown word and parts from the word which are from dictionary. I compared both for everypossible chunk combination of unknown word, so that I have least possible (number of chunks - number of words in that chunk from dictionary).
But my method is time consuming and has problems with ambiguous strings like 'loveisnowhere'.
for eg:
The Dormouse's storyThe Dormouse's storyOnce upon a time there were three little sisters and their names wereElsie LacieandTillie \nand they lived at the bottom of a well
Blockquote
My 1st Question is I need to split the string to get individual words like
eg:
storyOnce
should be converted to a list of meaningful words...
[The,....,story,Once,....]
and I also need to get rid of "\n" characters. I tried using
.strip
but it doesn't seem to work. I think I may be using it in the wrong way.
I am a newbie, so please elaborate on the answers. That will be helpful. | How to get meaningful words by splitting a continuous string? | 0.049958 | 0 | 0 | 2,646
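A minimal sketch of dictionary-based segmentation (not the answerer's exact "binary map" method): dynamic programming that splits a run-together chunk into the fewest dictionary words, assuming you already have some word list:

```python
def segment(text, words):
    words = set(w.lower() for w in words)
    n = len(text)
    best = [None] * (n + 1)      # best[i] = shortest word list covering text[:i]
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            piece = text[j:i]
            if best[j] is not None and piece.lower() in words:
                candidate = best[j] + [piece]
                if best[i] is None or len(candidate) < len(best[i]):
                    best[i] = candidate
    return best[n]               # None means no full segmentation exists

print(segment("storyOnce", ["the", "story", "once"]))   # ['story', 'Once']
```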
27,717,059 | 2014-12-31T06:21:00.000 | 0 | 1 | 0 | 0 | python-2.7,selenium-webdriver,robotframework | 27,816,296 | 1 | false | 1 | 0 | Have you tried running the test with 'pybot -L TRACE' ? | 1 | 0 | 0 | I am running test scripts in Robot framework using Google Chrome browser.
But when I run scripts consecutively two times, no message log gets generated in message log section in Run tab. This problem is being encountered only while using Chrome.
Can anyone help me understand why this is occurring? | Using Robot framework with Google Chrome browser | 0 | 0 | 1 | 580
27,717,893 | 2014-12-31T07:41:00.000 | 0 | 0 | 0 | 0 | python,django,django-signals | 27,780,840 | 1 | false | 1 | 0 | You should be able to do this by importing only the signals you need to use. You could have a separate set of signal files for each of your sites, importing them relatively to your wsgi.py or manage.py files, which should allow you to override or extend your entire base signal library.
If all of your sites are running from one wsgi.py or manage.py file, you will probably have to test which files to import instead of relying on importing them relative to your cwd. | 1 | 0 | 0 | I have a model that is firing a post_save signal in the django framework set to run more than one site. What I need to do is to be able to override that signal with a signal that will be defined in the specific site that needs this signal and use the one in the main app as a base.
Or maybe in short I want to be able to write these signals or any other code that is specific to each site in its own place but inheriting from the common code. | How to override a signal in the sites framework django framework | 0 | 0 | 0 | 92 |
27,719,695 | 2014-12-31T10:20:00.000 | 0 | 1 | 0 | 1 | python,encryption,https | 27,719,872 | 2 | true | 0 | 0 | if you have no problem rolling out a key file to all nodes ...
simply throw your messages into AES, and move the output like you moved the unencrypted messages ...
on the other side ... decrypt, and handle the plaintext like the messages you handled before ... | 1 | 0 | 0 | I want to create a python program that can communicate with another python program running on another machine. They should communicate via network. For me, it's super simple using BasicHTTPServer. I just have to direct my message to http:// server2 : port /my/message and server2 can do whatever action needed based on that message "/my/message". It is also very time-efficient as I do not have to check a file every X seconds or something similar. (My other idea was to put text files via ssh to the remote server and then read that file..)
The downside is, that this is not password protected and not encrypted. I would like to have both, but still keep it that simple to transfer messages.
The machines that are communicating know each other and I can put key files on all those machines.
I also stumbled upon twisted, but it looks rather complicated. Also gevent looks way too complicated with gevent.ssl.SSLsocket, because I have to check for byte length of messages and stuff..
Is there a simple example on how to set something like this up? | Python network communication with encryption and password protection | 1.2 | 0 | 0 | 595 |
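A sketch of the "wrap the messages in AES" suggestion using the third-party cryptography package (Fernet is AES plus an HMAC); the key file name is hypothetical: generate the key once and copy it to every node.

```python
from cryptography.fernet import Fernet

# one-time setup, run once and distribute secret.key to all machines:
# open("secret.key", "wb").write(Fernet.generate_key())

key = open("secret.key", "rb").read()
f = Fernet(key)

token = f.encrypt(b"/my/message")   # send this over the wire instead of plaintext
plaintext = f.decrypt(token)        # raises InvalidToken if tampered with
print(plaintext)
```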
27,724,624 | 2014-12-31T17:57:00.000 | 2 | 0 | 0 | 0 | python,django,apache | 27,725,014 | 1 | true | 1 | 0 | Thats the nature of using it in that format/setup. On development, running as 'manage.py runserver' it auto reloads on file changes.
production/proxy setups like you have, you need to reload/restart the service to have changes take effect. | 1 | 0 | 0 | I am testing a web application on both shared host and Apache localhost, using Django and fastcgi. When I edited my code and refreshing the page, many times the new code does not take effect. I think this is a cache issue, but I don't know how from the application.
For example: adding new url pattern to mysite/urls.py it does not take effect till I restart the Apache server on the localhost or waiting some time on the shared host.
I did not find any entries in mysite/settings.py that may allow any solution for that issue. I use Django 1.7 and Python 3.4.2. | How to cancel Django cache in fastcgi | 1.2 | 0 | 0 | 47 |
27,727,712 | 2015-01-01T01:30:00.000 | 8 | 0 | 1 | 0 | python,string,python-3.x,input,numbers | 27,727,796 | 6 | true | 0 | 0 | You can check if a string, x, is a single digit natural number by checking if the string contains digits and the integer equivalent of those digits is between 1 and 9, i.e.
x.isdigit() and 1 <= int(x) <= 9
Also, if x.isdigit() returns false, int(x) is never evaluated due to the expression using and (it is unnecessary as the result is already known) so you won't get an error if the string is not a digit. | 1 | 5 | 0 | Right now I'm trying to make a simple tic-tac-toe game, and while user chooses the sector of the board for their next move I need to check if the input is a single-digit natural number. I don't think just making a ['1','2','3'...'9'] list and calling an in statement for it is the most optimal thing. Could you suggest anything? | How to check if input is a natural number in Python? | 1.2 | 0 | 0 | 21,561 |
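A tiny usage sketch of the accepted check inside an input loop (Python 3):

```python
while True:
    x = input("Choose a sector (1-9): ")
    if x.isdigit() and 1 <= int(x) <= 9:
        break
    print("Please enter a single digit from 1 to 9.")
print("You chose sector", x)
```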
27,730,191 | 2015-01-01T11:03:00.000 | 2 | 0 | 0 | 0 | python,django,facebook,facebook-graph-api | 27,730,225 | 3 | false | 1 | 0 | You'll need to store two versions of the username: one for querying against, and one for display. | 2 | 0 | 0 | I am creating an application that needs to find facebook usernames that I’ve stored in the database, but facebook usernames are both case insensitive and insensitive to periods. For example, the username Johnsmith.55 is the same as johnsmith55 or even j…O.hn.sMiTh.5.5. when sending facebook API requests.
Obviously, I am using the _iexact query command to remedy the case insensitivity, but what can I use to remedy the insensitivity to periods? I know a cop out method is simply to save all usernames to the database after stripping them of periods and also stripping the username that’s being searched of its periods and then querying, but I want to save and display people’s username’s the way that they really appear in their facebook URL (which includes periods) even though facebook API requests technically are insensitive to periods.
Any ideas for a simple method of doing this? Thanks in advance for any help | Django: How to query terms with punctuation (ie: !.;’) insensitivity? | 0.132549 | 0 | 0 | 195 |
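A sketch of the "store two versions" answer; the model and field names are hypothetical. The display value keeps the periods, while a normalized copy (lowercased, periods stripped) is what gets queried:

```python
from django.db import models

class FacebookUser(models.Model):
    username_display = models.CharField(max_length=100)
    username_normalized = models.CharField(max_length=100, db_index=True)

    def save(self, *args, **kwargs):
        # keep the normalized copy in sync on every save
        self.username_normalized = self.username_display.replace('.', '').lower()
        super(FacebookUser, self).save(*args, **kwargs)

# query by normalizing the search term the same way:
# FacebookUser.objects.filter(username_normalized='johnsmith55')
```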
27,730,191 | 2015-01-01T11:03:00.000 | 1 | 0 | 0 | 0 | python,django,facebook,facebook-graph-api | 27,730,740 | 3 | false | 1 | 0 | You can also implement your own querying loguc with custom lookups in Django 1.7 or later. | 2 | 0 | 0 | I am creating an application that needs to find facebook usernames that I’ve stored in the database, but facebook usernames are both case insensitive and insensitive to periods. For example, the username Johnsmith.55 is the same as johnsmith55 or even j…O.hn.sMiTh.5.5. when sending facebook API requests.
Obviously, I am using the _iexact query command to remedy the case insensitivity, but what can I use to remedy the insensitivity to periods? I know a cop out method is simply to save all usernames to the database after stripping them of periods and also stripping the username that’s being searched of its periods and then querying, but I want to save and display people’s username’s the way that they really appear in their facebook URL (which includes periods) even though facebook API requests technically are insensitive to periods.
Any ideas for a simple method of doing this? Thanks in advance for any help | Django: How to query terms with punctuation (ie: !.;’) insensitivity? | 0.066568 | 0 | 0 | 195 |
27,731,670 | 2015-01-01T14:32:00.000 | 0 | 0 | 0 | 0 | python,scrapy,popen | 27,731,947 | 2 | false | 1 | 0 | I suggest let python focus on the scrape task and use something else for process control. If it were me, I'd write a small bash script to run your program.
Test that the launcher script works by running it with env -i yourscript.sh because that will make sure it runs without any inherited environment settings.
Once the bash script works correctly, including setting up virtualenv etc, you could have python run that bash script, not python. You've sidestepped any strange environment issues at that point and got yourself a pretty solid launcher script.
Even better, given you have the bash script at that point, use a "proper" process controller (daemontools, supervisor...) spin up the process, restart on crash, etc. | 1 | 4 | 0 | I have scrapy crawler scraping thru sites. On some occasions scrapy kills itself due to RAM issues. I rewrote the spider such that it can be split and run for a site.
After the initial run, I use subprocess.Popen to submit the scrapy crawler again with new start item.
But I am getting this error:
ImportError: No module named shop.settings
Traceback (most recent call last):
  File "/home/kumar/envs/ishop/bin/scrapy", line 4, in <module>
    execute()
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/cmdline.py", line 109, in execute
    settings = get_project_settings()
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/utils/project.py", line 60, in get_project_settings
    settings.setmodule(settings_module_path, priority='project')
  File "/home/kumar/envs/ishop/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 109, in setmodule
    module = import_module(module)
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named shop.settings
The subprocess cmd is
newp = Popen(comm, stderr=filename, stdout=filename, cwd=fp, shell=True)
comm -
source /home/kumar/envs/ishop/bin/activate && cd /home/kumar/projects/usg/shop/spiders/../.. && /home/kumar/envs/ishop/bin/scrapy crawl -a category=laptop -a site=newsite -a start=2 -a numpages=10 -a split=1 'allsitespider'
cwd - /home/kumar/projects/usg
I checked sys.path and it is correct ['/home/kumar/envs/ishop/bin', '/home/kumar/envs/ishop/lib64/python27.zip', '/home/kumar/envs/ishop/lib64/python2.7', '/home/kumar/envs/ishop/lib64/python2.7/plat-linux2', '/home/kumar/envs/ishop/lib64/python2.7/lib-tk', '/home/kumar/envs/ishop/lib64/python2.7/lib-old', '/home/kumar/envs/ishop/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7', '/usr/lib/python2.7', '/home/kumar/envs/ishop/lib/python2.7/site-packages']
But looks like the import statement is using "/usr/lib64/python2.7/importlib/__init__.py" instead of my virtual env.
Where am I wrong? Help please? | Scrapy ImportError: No module named project.settings when using subprocess.Popen | 0 | 0 | 0 | 1,463 |
27,732,911 | 2015-01-01T17:19:00.000 | 1 | 0 | 1 | 0 | python,pip,virtualenv | 27,732,951 | 4 | false | 1 | 0 | Either change the permissions of the virtual enviroment directory or recreate it without using sudo. | 3 | 0 | 0 | I'm trying to install django in a virtual environment. I'm in a virtual environment, but when i type pip install django I get a permission denied error. If I try to run sudo pip install django, however, I get sudo: pip: command not found. Not entierly sure where to go from here. | Installing python packages in virtual environment | 0.049958 | 0 | 0 | 1,051 |
27,732,911 | 2015-01-01T17:19:00.000 | 0 | 0 | 1 | 0 | python,pip,virtualenv | 27,732,983 | 4 | false | 1 | 0 | This is a permissions issue and caused by how the virtual environment has been set up. The safest thing to do now is to remove the virtual environment and make sure to recreate it with the user's permissions (no sudo). And as a side note, the command not found error is due to pip not being set up for root. | 3 | 0 | 0 | I'm trying to install django in a virtual environment. I'm in a virtual environment, but when i type pip install django I get a permission denied error. If I try to run sudo pip install django, however, I get sudo: pip: command not found. Not entierly sure where to go from here. | Installing python packages in virtual environment | 0 | 0 | 0 | 1,051 |
27,732,911 | 2015-01-01T17:19:00.000 | 2 | 0 | 1 | 0 | python,pip,virtualenv | 27,733,194 | 4 | false | 1 | 0 | Since you setup your virtual environment with sudo virtualenv /opt/myenv, you now need to run the correct pip instance (i.e. the one inside your newly created virtual environment).
Therefore, your command needs to be sudo /opt/myenv/bin/pip install django | 3 | 0 | 0 | I'm trying to install django in a virtual environment. I'm in a virtual environment, but when i type pip install django I get a permission denied error. If I try to run sudo pip install django, however, I get sudo: pip: command not found. Not entierly sure where to go from here. | Installing python packages in virtual environment | 0.099668 | 0 | 0 | 1,051 |
27,733,864 | 2015-01-01T19:21:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 27,734,418 | 2 | false | 0 | 0 | Depending on the order in which modules are imported isn't very common and this hints that this is not the right approach. Instead the library should include functionality as needed to allow the user to override certain entries.
This also fits Python's views on software development: explicit is better than implicit and everything the user wishes to override should be explicitly spelled out and communicated with the library. This may seem cumbersome but it requires the least amount of magic and is better to maintain in the long run. | 1 | 1 | 0 | The use-case for this is that a library I'm writing uses Python modules to store data that is by necessity heavily interleaved with code; each "database entry" is a subclass of a class defined in a higher module. There is also a module containing functions for searching this "database" which uses introspection to find entries based on filters given by the user, and only checks modules that have been imported already. It's usually configured to return the first result it sees.
This library will also want to interact with user-provided database modules. A user module may want to "override" an entry from another module, and I'd like the order in which the modules are checked to be well defined. Ideally, I'd like the entries to be checked from most recently imported to least recently imported.
How can I sort the contents of sys.modules by the order in which they were imported? | How to determine if one module has been loaded before or after another | 0 | 0 | 0 | 329 |
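A sketch of the answer's "explicitly spelled out" suggestion: rather than inferring order from sys.modules, have each database module register itself with the library at import time, so the search order is well defined (names here are hypothetical):

```python
# library.py
_registered = []

def register(module_name):
    # each database module calls library.register(__name__) at import time
    _registered.append(module_name)

def search_order():
    # check the most recently imported (registered) modules first
    return list(reversed(_registered))
```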
27,734,053 | 2015-01-01T19:44:00.000 | 1 | 0 | 1 | 1 | python,pip | 27,735,671 | 2 | false | 0 | 0 | For most popular packages, There is a workaround for recent ubuntu systems. For example, I want to install matplotlib. When you order pip install matplotlib, it usually fails because of a missing dependency.
You can use apt-get install python-matplotlib instead. For python3, you can use apt-get install python3-matplotlib | 1 | 35 | 0 | Many python packages have build dependencies on non-Python packages. I'm specifically thinking of lxml and cffi, but this dilemma applies to a lot of packages on PyPI. Both of these packages have unadvertised build dependencies on non-Python packages like libxml2-dev, libxslt-dev, zlib1g-dev, and libffi-dev. The websites for lxml and cffi declare some of these dependencies, but it appears that there is no way to do figure this out from a command line.
As a result, there are hundreds of questions on SO that take this general form:
pip install foo fails with an error: "fatal error: bar.h: No such file or directory". How do I fix it?
Is this a misuse of pip or is this how it is intended to work? Is there a sane way to know what build dependencies to install before running pip? My current approach is:
I want to install a package called foo.
pip install foo
foo has a dependency on a Python package bar.
If bar build fails, then look at error message and guess/google what non-Python dependency I need to install.
sudo apt-get install libbaz-dev
sudo pip install bar
Repeat until bar succeeds.
sudo pip uninstall foo
Repeat entire process until no error messages.
Step #4 is particularly annoying. Apparently pip (version 1.5.4) installs the requested package first, before any dependencies. So if any dependencies fail, you can't just ask pip to install it again, because it thinks it's already installed. There's also no option to install just the dependencies, so you must uninstall the package and then reinstall it.
Is there some more intelligent process for using pip? | How to `pip install` a package that has non-Python dependencies? | 0.099668 | 0 | 0 | 4,418 |
27,742,457 | 2015-01-02T12:45:00.000 | 0 | 0 | 0 | 0 | python,django | 27,744,297 | 3 | false | 1 | 0 | I would create the minimal django models on the external databases => those that interact with your code:
Several outcomes to this
If parts of the database you're not interested in change, it won't have an impact on your app.
If the external models you're using change, you probably want to be aware of that as quickly as possible (your app is likely to break in that case too).
All the relational databases queries in your code are handled by the same ORM. | 1 | 1 | 0 | I have a typical Django project with one primary database where I keep all the data I need.
Suppose there is another DB somewhere with some additional information. That DB isn't directly related to my Django project, so let's assume I do not even have control over it.
The problem is that I do not know if I need to create and maintain a model for this external DB so I could use Django's ORM. Or maybe the best solution is to use raw SQL to fetch data from the external DB and then use this info to filter data from the primary DB using the ORM, or directly in views.
The solution of creating a model seems to be quite OK, but the fact that the DB isn't a part of my project means I am not aware of possible schema changes, so it looks like bad practice.
So in the end if I have some external resources like DBs that are not related to but needed for my project should I:
Try to create django models for them
Use raw SQL to get info from external DB and then use it for filtering data from the primary DB with ORM as well as using data directly in views if needed
Use raw SQL both for a primary and an external DB where they intersect in app's logic | Django models with external DBs | 0 | 1 | 0 | 437 |
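A sketch of the answer's minimal-models suggestion; the table name, fields, and the second DATABASES alias are all hypothetical:

```python
from django.db import models

class ExternalThing(models.Model):
    name = models.CharField(max_length=200)

    class Meta:
        managed = False        # Django will not create or migrate this table
        db_table = 'things'    # existing table in the external database

# reads go through a second connection defined in settings.DATABASES:
# ExternalThing.objects.using('external').filter(name__icontains='foo')
```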
27,743,031 | 2015-01-02T13:31:00.000 | 0 | 0 | 0 | 0 | python,mysql,pymysql | 27,743,210 | 2 | false | 0 | 0 | This happens to be one of the reasons desktop client-server architecture gave way to web architecture. Once a desktop user has access to a dbms, they don't have to use just the SQL in your application. They can do whatever their privileges allow.
In those bad old days, client-server apps only could change rows in the DBMS via stored procedures. They didn't have direct privileges to INSERT, UPDATE, or DELETE rows. The users of those apps had accounts that were GRANTed a limited set of privileges; they could SELECT rows and run procedures, and that was it. They certainly did not have any create / drop / table privilege.
(This is why a typical DBMS has such granular privilege control.)
You should restrict the privileges of the account or accounts employed by the users of your desktop app. (The same is, of course, true for web app access accounts.) Ideally, each user should have her own account. It should only grant access to the particular database your application needs.
Then, if you don't trust your users to avoid trashing your data, you can write, and test, and deploy, stored procedures to do every insert, update, or delete needed by your app.
This is a notoriously slow and bureaucratic way to get IT done; you may want to make good backups and trust your users, or switch to a web app.
If you do trust them tolerably well, then restrict them to the particular database employed by your app. | 1 | 1 | 0 | I have a Python client program (which will be available to a limited number of users) that fetches data from a remote MySQL-DB using the pymysql-Module.
The problem is that the login data for the DB is visible for everyone who takes a look at the code, so everyone could manipulate or delete data in the DB. Even if I would store the login data in an encrypted file, some still could edit the code and insert their own MySql queries (and again manipulate or delete data).
So how can I access the DB from my program and still SELECT, DELETE or UPDATE data in it, but make sure that no one can execute his own (evil) SQL Code (except the ones that are triggered by using the GUI)? | Secure MySQL login data in a Python client program | 0 | 1 | 0 | 934 |
27,744,155 | 2015-01-02T15:03:00.000 | 0 | 0 | 1 | 0 | python,lambda,pygame,eval | 27,744,281 | 1 | true | 0 | 1 | When I understand you correctly, you are mixing Callbacks with Events.Perhaps this makes the problem so problematic.
In Python functions are objects, and can be passed as any other object like strings. So there is no need to pass names and evaluate them. | 1 | 0 | 0 | Versions
Python 3.4 with Pygame 1.9.2
Question
How can I pass the name of a function/method from one module, where this function does NOT exist, to the module containing it without having to resort to making it a string and evaluating it by eval()?
Background
I have a simple MVC pattern for training purposes. For simplicity's sake let's just presume we pressed a button in the menu. Here's what happens:
The controller sends our event, the click as ClickEvent (containing position, mouse button and if the button was pressed or released), to the currently running logic, the menu. The menu then evaluates the click depending on its position and if it happened on a button, it returns a ModelEvent containing the name of a method of the model. The model then receives said ModelEvent and evaluates it. If any visible changes happen, it creates a new ViewEvent (this would go beyond my question).
Approaches
I approached the problem in three ways so far, once by lambda (which doesn't seem to fit my needs at all or I horribly misunderstood it) and twice by using strings with eval(). The shortcomings of the latter approach are obvious: The name of the function has to be passed between the instances as a string and then evaluated by eval() which takes its time.
What would be an efficient way to pass the names of methods and functions from one module, where they don't exist, to another, where they are to be executed, without having to use strings and eval()?
Or, if you see any grave mistakes in my general approach, I'd be glad to hear about it as I'm still learning. | Python: Passing a function name as an argument between modules | 1.2 | 0 | 0 | 116 |
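A small sketch of the accepted answer's point: hand over the bound method itself (or look it up with getattr) instead of passing its name as a string and calling eval(); the class and event names are hypothetical:

```python
class ModelEvent(object):
    def __init__(self, action):
        self.action = action            # a callable, not a string

class Model(object):
    def start_game(self):
        print("starting game")

    def handle(self, event):
        event.action()                  # just call it, no eval() needed

model = Model()
model.handle(ModelEvent(model.start_game))   # the menu passes the method object

# if only a name is available, getattr still avoids eval():
getattr(model, "start_game")()
```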
27,745,253 | 2015-01-02T16:28:00.000 | 1 | 0 | 1 | 0 | java,python,time | 27,745,293 | 3 | false | 1 | 0 | Remove the decimal point and convert it to a float. | 1 | 0 | 0 | I have a python date formatted like this 1418572798.498 within a string.
In Java the dates are formatted like this 1418572798498.
How to convert this string to Java date?
Is there any third party library to use for the conversion? | How to convert python time to java date | 0.066568 | 0 | 0 | 2,695 |
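Two tiny sketches of the conversion on the Python side (the literal version assumes the string always has exactly three decimal places):

```python
s = "1418572798.498"
java_millis = int(s.replace(".", ""))          # literally drop the decimal point
java_millis_2 = int(round(float(s) * 1000))    # same result via arithmetic
print(java_millis, java_millis_2)              # 1418572798498 1418572798498
```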
27,746,182 | 2015-01-02T17:37:00.000 | 0 | 0 | 1 | 0 | python,json,api,imgur | 28,462,956 | 2 | true | 0 | 0 | Imgur updated their docs, so the new and correct form of the URL I used was:
r = requests.get("https://api.imgur.com/3/gallery/r/earthporn/top/") | 1 | 0 | 0 | I am having a bit of trouble understanding API calls and the URLs I'm supposed to use for grabbing data from Imgur. I'm using the following URL to grab JSON data, but I'm receiving old data: http://imgur.com/r/wallpapers/top/day.json
But if I strip the .json from the end of the URL, I see the top pictures from today.
All I want is the JSON data from the top posts of today from Imgur, but I keep getting data that refers to Dec 18th, 2014.
I'm using the call in a Python script. I have a token from Imgur to do the stuff, and reading the API documentation, I see a lot of the examples start with https://api. instead of http://imgur.
Which one should I use? | Correct API call to request JSON-formatted data from Imgur? | 1.2 | 0 | 1 | 627 |
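A small sketch building on the accepted answer's URL; the Authorization header format is an assumption about how the v3 API expects the client ID mentioned in the question:

```python
import requests

headers = {"Authorization": "Client-ID YOUR_CLIENT_ID"}   # placeholder ID
r = requests.get("https://api.imgur.com/3/gallery/r/wallpapers/top/",
                 headers=headers)
posts = r.json()["data"]           # v3 responses wrap the payload in "data"
print(len(posts))
```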
27,747,578 | 2015-01-02T19:36:00.000 | -42 | 0 | 0 | 0 | python-2.7,flask | 27,764,937 | 6 | true | 1 | 0 | There is no way to clear session or anything.
One must simply change the app.config["SECRET_KEY"] and the contents in session dictionary will get erased. | 1 | 37 | 0 | While importing flask, we import modules such as session etc.
SecureCookieSession is a kind of dictionary, that can be accessed using session.
Now, I try to clear all the junk variables that I used while trying to build a website.
One of the answers on stackoverflow used a command like session.clear() for clearing the contents of a session. But such a command gives an error that no such command exists.
Can anyone point out for me how to clear the SecureCookieSession and how to clear the session every time I shutdown the server or close the website? | How do I clear a flask session? | 1.2 | 0 | 0 | 74,779 |
27,749,575 | 2015-01-02T22:49:00.000 | 0 | 0 | 1 | 0 | python,pexpect | 27,749,852 | 1 | false | 0 | 0 | There’s no way to pass complex object directly between two python programs even one spawns another. You should serialise object state and then pass it. other side should deserialise it before use.
If you want to pass data as a command-line argument, you should use a string object (not bytes, but a string). Please note that repr may not give a string which you can use to create a "clone".
Also you can pass data with external files, IPC, TCP, UDP, FIFO or many other ways. | 1 | 0 | 0 | How to pass a pexpect spawn object as argument from one python file to another. I tried to pass it, but the error is that it has to be a string. Then I converted the object to string. But it's not working as expected. | How to pass a pexpect spawn object | 0 | 0 | 0 | 616 |
27,751,049 | 2015-01-03T02:09:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,properties | 27,751,309 | 1 | true | 0 | 0 | This is most probably because your code is being run in an IDE with an integrated debugger, which is re-checking the values of the object's attributes whenever it is accessed. This is not normal behavior for CPython. | 1 | 0 | 0 | Through complete accident and leaving in debugging prints, I've noticed that every time any attribute of my object is accessed, all of its properties are evaluated.
Is this normal behavior in a standard CPython environment?
If so, why, and is there any way to stop this behavior? | Is it normal for all of an object's properties to be evaluated on every attribute access? | 1.2 | 0 | 0 | 33 |
27,760,817 | 2015-01-03T23:45:00.000 | 0 | 0 | 0 | 0 | python,tornado | 27,760,934 | 2 | false | 0 | 0 | this ultimately will be decided if the database is running on the same host and in the instance of MySQL. If it is running in the same instance you should be able to prefix your tables names with the database name. For example; "select splat from foo.bar where splat is not null" where foo is the database name and bar is the table name. Hope this helps! | 2 | 0 | 0 | I am connecting to MySQL database using torndb in Python. Is there a way to switch between databases after connection is established? Like the select_db method? | Torndb - Switch from one database to another | 0 | 1 | 0 | 106 |
27,760,817 | 2015-01-03T23:45:00.000 | 2 | 0 | 0 | 0 | python,tornado | 27,765,801 | 2 | false | 0 | 0 | Switch db:
conn.execute('use anotherDBName'); | 2 | 0 | 0 | I am connecting to MySQL database using torndb in Python. Is there a way to switch between databases after connection is established? Like the select_db method? | Torndb - Switch from one database to another | 0.197375 | 1 | 0 | 106 |
27,761,448 | 2015-01-04T01:27:00.000 | 1 | 0 | 0 | 0 | python,tile,tmx,tiled,cocos2d-python | 27,761,506 | 1 | false | 0 | 1 | All I had to do was
cell.tile.image = image | 1 | 0 | 0 | -I'm using python and cocos2D
I have the file loading a tmx-map but now I want to change a specific tile to display an image from another file, I have saved the specific tile that I want to change in a variable but changing it I don't know how.
Thanks in advance | python cocos2d change tile's image | 0.197375 | 0 | 0 | 266 |
27,761,684 | 2015-01-04T02:11:00.000 | 0 | 1 | 1 | 0 | python,mysql,ruby,json,mongodb | 27,761,882 | 1 | false | 1 | 0 | I think this all boils down to what the most important needs are for the project. These are some of the questions I would try to answer before selecting the technology:
Will I need to access records individually after inserting into the database?
Will I ever need to aggregate the data when reading it (for reporting, for instance)?
Is it more important to the project goals to have the data written quickly or read quickly?
How large do I anticipate the data will grow and will the database technology I select scale easily, cheaply and reliably to support the data volume?
Will the schema of the data change? Do I need a schemaless database solution like MongoDB?
Where are the trade offs between development time/cost, maintenance time/cost and time/cost for running the program?
Without knowing much about the particulars or your project or its goals I would say it's generally not a good idea to store a single JSON object for the entirety of the data. This would likely make it more difficult to read the data and append to it in the future. You should probably apply some more thought on how to model your data and represent it in the database in a way that will make sense when you actually need to use it later. | 1 | 0 | 0 | Im trying to design a system that can periodic "download" a large amount of data from an outside api..
This user could have around 600,000 records of data that I need once, then to check back every hour or so to reconcile both datasets.
I'm thinking about doing this in Python or Ruby in background tasks eventually, but I'm curious about how to store the data.
Would it be possible/a good idea to store everything in one record hashed as JSON vs copying each record individually?
It would be nice to be able to index or search the data without anything failing so I was wondering what would be the best implementation memory wise.
For example, if a user has 500,000 tweet records and I want to store all of them, which would be a better implementation?
one record as JSON => user_1 = {id:1 twt:"blah"},{id:2 twt:"blah"},.....{id:600,000 twt:"blah"}
vs
many records =>
id:1 outside_id=1 twt:"blah"
id:2 outside_id=1 twt:"blah"
id:3 outside_id=1 twt:"blah"
I'm curious how I would find out how memory intensive each method is or what is the best solution.
The records are a lot more complex, with maybe 40 attributes per record that I want to store.
Also would MySQL or MongoDB be a better solution for fastest copy/storage? | Best Way to Store Large Amount of Outside API Data... using Ruby or Python | 0 | 0 | 0 | 149 |
27,763,265 | 2015-01-04T07:01:00.000 | 0 | 0 | 1 | 1 | python,bash,cygwin,alias | 27,763,299 | 2 | false | 0 | 0 | The error indicate that the bash.bashrc file was not loaded.
Check these:
The bashrc filename is usually .bashrc / .bash_profile and it's in the user's home directory. Or it should exist as /etc/bash.bashrc.
The file is not loaded automatically, restart the shell to load it. | 1 | 0 | 0 | When I try to set aliases, I get two different results using two different methods. I'm using cygwin.
METHOD 1:
In cygwin I execute the following:
alias python='/cygdrive/c/python27/python.exe'
python <name of python file>
...and it runs just fine, as expected
METHOD 2:
In my bash.bashrc file I add the following line:
alias python='/cygdrive/c/python27/python.exe'
python <name of python file>
...and I get the following error:
:no such file or directoryython.exe
Just two questions:
What is the difference between these two methods?
Why is one causing an error and one is not? Thanks <3 | Set aliases in bash.bashrc | 0 | 0 | 0 | 853
27,770,573 | 2015-01-04T21:32:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 27,770,590 | 1 | false | 0 | 0 | You have the "your string".upper() or "your string".lower() functions, which will allow you to compare the strings, assuring the comparison is case insensitive. | 1 | 0 | 0 | For example i am given this string as an input:
"HeLlo"
How can I make this case-insensitive for later use?
I want it to be equal to "hello" or "HELLo" etc... | How can i make a string case insensitive in Python? | 0.197375 | 0 | 0 | 2,665 |
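A tiny usage sketch of the .lower() comparison from the answer:

```python
user_input = "HeLlo"
if user_input.lower() == "hello":
    print("matched, case-insensitively")
```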
27,772,452 | 2015-01-05T01:56:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-allauth | 27,787,697 | 1 | false | 1 | 0 | Problem solved: the secret key in my configuration file started with an (unicode) opening quote, instead of a regular double quote, probably a result of copying and pasting the original file that had been written with an editor that uses these "smarter" quotes. | 1 | 0 | 0 | I'm trying to run a django website on my local computer. It works fine on an external server, but I didn't set it up and right now I don't have access to all the specs.
The issue I have is when I try to log in the web site as a user, which has been defined. Running in debug mode I get a detailed error page containing on top the message:
UnicodeDecodeError at /accounts/login/
'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
Looking down I can see that the error occurs in crypto.py, function salted_hmac at the line
key = hashlib.sha1((key_salt + secret).encode('utf-8')).digest()
and displaying the local variables I see
key_salt u'django.contrib.sessionsSessionStore'
secret '\xe2\x80\x9cXXX"'
value '{}'
where XXX is a 50 character string identical to the SECRET_KEY defined in my configuration file. Variable secret is assigned in the function through:
if secret is None:
secret = settings.SECRET_KEY
and I know that secret is None at this point since it is a third parameter in salted_hmac not used by the caller. I strongly suspect that the error occurs because python can not handle the unicode characters at the beginning of the variable secret.
So I have a few questions:
1) Why is setting.SECRET_KEY different from the SECRET_KEY I defined in the configuration file? Is it how it should be? And if it is do I have any control over what it should be?
2) Could something in my environment be responsible for this?
A few notes: As I mentioned it works on a server, running ubuntu 1.6, python 2.7. However I can not right now obtain the info on the versions for the other packages. But even if I could I still want to know why it doesn't work on my installation. I have tested with django 1.6.1, python 2.7 on lubuntu 14.04, opensuse 13.2, with identical results.
Thanks for any help or hint.
a | UnicodeDecodeError in django 1.6 and allauth | 0 | 0 | 0 | 135 |
27,775,759 | 2015-01-05T08:27:00.000 | 0 | 0 | 0 | 0 | python,selenium,selenium-webdriver,functional-testing | 55,522,157 | 6 | false | 0 | 0 | By importing Keys Class, we can open page in new tab or new window with CONTROL or SHIFT and ENTER these keys:
driver.find_element_by_xpath('//input[@name="login"]').send_keys(Keys.CONTROL,Keys.ENTER)
or
driver.find_element_by_xpath('//input[@name="login"]').send_keys(Keys.SHIFT,Keys.ENTER) | 1 | 26 | 0 | I need to open link in new tab using Selenium.
So is it possible to perform ctrl+click on element in Selenium to open it in new tab? | Send keys control + click in Selenium with Python bindings | 0 | 0 | 1 | 102,116 |
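A sketch of an actual ctrl+click (rather than sending ctrl+Enter to a focused element) using ActionChains; the driver and locator are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://example.com")
link = driver.find_element_by_link_text("More information...")

# hold CTRL, click the element, release CTRL
ActionChains(driver) \
    .key_down(Keys.CONTROL) \
    .click(link) \
    .key_up(Keys.CONTROL) \
    .perform()
```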
27,776,107 | 2015-01-05T08:56:00.000 | 1 | 0 | 1 | 0 | python,kivy | 27,779,076 | 1 | false | 0 | 1 | Just replace the text with some function of the original string, e.g. textinput.text = ''.join('something ', textinput.text.replace('a', 'b'), ' and something else'). | 1 | 2 | 0 | TextInput class
Is there any way to append or edit text ???
Problem:
I created TextInput object and accessed text method to print strings.
But every time I access the text method, it prints a new string and it doesn't retain the previous value.
How to fix the issue?
what is alternative for TextInput? | edit or append text (TextInput class) in kivy | 0.197375 | 0 | 0 | 1,054 |
27,778,737 | 2015-01-05T11:37:00.000 | 0 | 0 | 0 | 0 | python,django,python-2.7,django-filter | 60,852,547 | 3 | false | 1 | 0 | maybe you try use some js solution.
Just get all the current attributes from the URL using JS and build a dict.
When clicking another attribute button, you can add or replace the attribute in the dict.
use this dict to form your url. | 1 | 2 | 0 | Scenario:
Products can have multiple attributes defined in the database, I want to be able to filter by those attributes.
Since i have not found a way to use dynamic attributes using django-filter, this is currently achieved by using django filter MethodFilter which parses the attributes passed as query string as:
/products?attribute=size_2&attribute=color_red
url like this is parsed and it works.
The problem is building the url:
I couldn't find a reasonable way to build the url, that would take current search parameters into account and add/replace those parameters.
Django seems to be forcing me to use urlconf but django-filter uses query string parameters.
What i try to achieve is this:
The user is on page /products?attribute=size_10 which display all products with that size.
When he clicks the link "color red" the new URL becomes: /products?attribute=size_10&attribute=color_red
Can you point me to the django way of implementing this? | How to create link for django filter | 0 | 0 | 0 | 2,342 |
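One possible sketch (my own approach, not taken from the answers above): build the link's query string from the current request's parameters plus the new attribute, so existing filters are preserved:

```python
def add_attribute_url(request, new_value):
    params = request.GET.copy()              # mutable copy of the QueryDict
    values = params.getlist('attribute')
    if new_value not in values:
        values.append(new_value)
    params.setlist('attribute', values)
    return '/products?' + params.urlencode()

# on /products?attribute=size_10, add_attribute_url(request, 'color_red')
# returns '/products?attribute=size_10&attribute=color_red'
```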
27,786,842 | 2015-01-05T19:53:00.000 | 0 | 0 | 0 | 1 | google-app-engine,google-drive-api,document-management,google-apps-for-education,google-app-engine-python | 27,791,624 | 1 | true | 1 | 0 | Since you'll be building your own software, the answer to "will it do what I want" is always "yes, eventually".
You'll need to make a decision about document formats, which in turn will influence your indexing mechanism. Specifically, you have two primary options:-
convert the files to Google document formats (doc, spreadsheet, etc). You will then be able to use Google's own indexing and search, eg. as you would from drive.gogle.com. The downside is that formatting may be lost during the import/export round trip.
keep the documents in their native format (eg. MS .docx), and perform your own indexing. This will require parsing each document type, which is non-trivial, but I'm sure there are third party libraries to assist. The upside is that the documents you retrieve are the identical documents you imported.
I think I would look at doing both of the above. Thus when you import a file into your DMS you store it twice into Google Drive, converted and unconverted. Use App Engine datastore to keep track of the pairings. This way you can use the Drive search to find the converted document, but the file you serve back to the user is its unconverted twin. | 1 | 1 | 0 | I administer a University's document management system. The system is a 3rd party that integrates with another 3rd party database that acts as our ERP system. The DMS is quite clunky and has a wide array of terrible bugs / lacks features & support. I've been playing around with Google App Engine / Drive SDK in my free time out of curiosity. Since we are a Google Apps for Education customer, we have unlimited drive space and all our users are Google apps users.
Would it be feasible to internally build a web application (potentially powered by Google App Engine) that utilizes the Drive SDK to manage all the university's files (~ 6 TB). From my experimenting it seems to have all the capabilities required. | Google Drive / App Engine for Document Management System | 1.2 | 0 | 0 | 1,208 |
27,788,609 | 2015-01-05T21:58:00.000 | 0 | 0 | 0 | 0 | python,opencv,classification,cascade | 27,810,170 | 1 | true | 0 | 0 | I solved it!
I downloaded opencv and all other required programs on another computer and tried running train classifier on another set of pictures. After I verified that it worked in the other computer I copied all files back to my computer and used them. | 1 | 0 | 1 | I have been successful at training a classifier before but today I started getting errors.
Problem:
When I try to train a classifier using opencv_traincascade.exe I get the following message:
"Training parameters are loaded from the parameter file in data folder!
Please empty the data folder if you want to use your own set of parameters."
The trainer then stops midway in stage 0 with the following message:
===== TRAINING 0-stage =====
BEGIN
POS count : consumed 2 : 2
Train dataset for temp stage can not be filled. Branch training terminated.
Cascade classifier can't be trained. Check the used training parameters.
Here is how I got to the problem:
I had a parameters file inside the classifier folder where my trainer would usually train classifiers to. I forgot to delete this parameters file before running the traincascade.exe file. Even though I erased the parameter file I still got the same error.
Thanks for helping. | opencv_traincascade.exe error, "Please empty the data folder"? | 1.2 | 0 | 0 | 359 |
27,789,333 | 2015-01-05T22:55:00.000 | 1 | 1 | 0 | 0 | python,google-analytics,pypi | 27,789,541 | 2 | false | 0 | 0 | It's not possible to include Google Analytics code for a project on PyPI. However, you can include it on the project's website (if any) and other pages related to the project, such as documentation. | 1 | 3 | 0 | I'd like to track my pypi project using google analytics. I was wondering where exactly I should embed the google analytics' code? | How to track a PYPI project using google analytics? | 0.099668 | 0 | 0 | 313 |
27,790,165 | 2015-01-06T00:16:00.000 | 0 | 0 | 1 | 1 | python,windows | 27,790,624 | 4 | false | 0 | 0 | Any good programmer's text editor will do. I personally use SublimeText 3, but I've used Eclipse + PyDev before to great success, and the usual suspects (emacs, vim, Notepad++) will work just fine too. | 1 | 0 | 0 | I've been writing and using short Python scripts (~100 lines) for various tasks in Ubuntu using the Geany text editor, which I like for it's simplicity (setup, F5 to run, etc.) and syntax highlighting.
I would like to know if there is a similar application for Windows. Because what I've found so far requires downloading 3 different applications or using a big IDE like eclipse. | Python development in Windows | 0 | 0 | 0 | 139 |
27,796,040 | 2015-01-06T09:50:00.000 | 0 | 1 | 1 | 0 | python,scripting,diff | 27,796,129 | 4 | false | 0 | 0 | Git can do that, check out github its exactly what you look for | 1 | 0 | 0 | The problem statement is:
Given 2 python files 'A.py' and 'B.py' (modified A.py), is there a way we can find out the:
1.Added methods
2.Removed methods
3.Modified methods : (a) Change in method prototype (b) Change in method content
Similarly for classes(changed/removed/modified) as well.
My Solution:
I was thinking that if I could use a good diff tool and find out the added/removed/modified lines, I could parse them to find out the required details.
I tried with git-diff but it gives line-wise diff. So if a method got shifted because some other method was added before that, it shows the method as deleted from the original file and added in the later file.
I saw that 'meld' gives a good diff between files which I could use easily, but I could not find a way to programmatically capture the output of meld.
Please provide any follow up on my solution, or any other solution for the problem
FYI: I want to automate this as there are many such files. Also, this has to be done on a linux box. | Find difference between 2 python files | 0 | 0 | 0 | 2,811 |
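A rough sketch of one way to get added/removed definitions without relying on a line-wise diff: parse both files with the ast module and compare the collected names (using the question's A.py/B.py file names):

```python
import ast

def defined_names(path):
    tree = ast.parse(open(path).read())
    funcs = {node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)}
    classes = {node.name for node in ast.walk(tree) if isinstance(node, ast.ClassDef)}
    return funcs, classes

funcs_a, classes_a = defined_names("A.py")
funcs_b, classes_b = defined_names("B.py")
print("added methods:  ", funcs_b - funcs_a)
print("removed methods:", funcs_a - funcs_b)
print("added classes:  ", classes_b - classes_a)
print("removed classes:", classes_a - classes_b)
```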
27,796,289 | 2015-01-06T10:06:00.000 | 0 | 0 | 0 | 0 | python,resources,simpy | 27,813,798 | 1 | false | 0 | 0 | Using a preemptive resource was my first idea, too. If this does not work for you, you may have to subclass Resource and/or the corresponding event classes. You can use the other, more specialized Resource subclasses as an example. | 1 | 0 | 0 | I would like to simulate a system where resources have an opening time during the day:
When a process requests a resource, the resource gives its availability only if there is enough time left in the day to complete the process (the process would declare how much time it needs); otherwise, the resource waits until the next day, holding the process in the queue without letting other processes jump ahead.
I was thinking to implement a preemptive resource and a special high priority process that keeps the resource busy during the closing time. Unfortunately, if one process gets preempted, it is interrupted and gets out of the queue; thus, it seems that I cannot use preemption.
How, in your opinion, could opening time be simulated?
Thanks in advance for your answers!
Fausto | simpy resources opening time | 0 | 0 | 0 | 149 |
27,798,842 | 2015-01-06T12:36:00.000 | 1 | 0 | 0 | 0 | python,flask,eve | 41,322,114 | 3 | false | 1 | 0 | try set import_name arg for Eve:
app = Eve(import_name=__name__) | 1 | 5 | 0 | I am running Flask and Eve on localhost at a same time. The Flask app serves static files and makes requests to the Eve app to get some data. I want to run Eve only, without a separate Flask app. How can I serve static files with Eve? | Serve static files with Eve | 0.066568 | 0 | 0 | 1,052 |
27,799,692 | 2015-01-06T13:28:00.000 | 0 | 0 | 0 | 0 | python,zip,epub,epub3 | 28,436,076 | 2 | true | 1 | 0 | The solution I've found:
delete the previous mimetype file
when creating the new archive, create a new mimetype file before adding anything else: zipFile.writestr("mimetype", "application/epub+zip")
Why does it work : the mimetype is the same for all epub : "application/epub+zip", no need to use the original file. | 1 | 0 | 0 | I'm working on a script to create epub from html files, but when I check my epub I have the following error : Mimetype entry missing or not the first in archive
The Mimetype is present, but it's not the first file in the epub. Any idea how to put it in first place in any case using Python ? | epub3 : how to add the mimetype at first in archive | 1.2 | 0 | 1 | 1,098 |
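A sketch of the fix described in the answer: write the mimetype entry first and, per the EPUB spec, leave it uncompressed; the output and content file names are hypothetical:

```python
import zipfile

with zipfile.ZipFile("book.epub", "w") as z:
    # the mimetype must be the very first entry and stored uncompressed
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.write("META-INF/container.xml", "META-INF/container.xml",
            compress_type=zipfile.ZIP_DEFLATED)
    # ... add the rest of the EPUB content here ...
```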
27,800,953 | 2015-01-06T14:39:00.000 | 0 | 0 | 1 | 0 | python,python-idle | 27,810,174 | 1 | false | 0 | 0 | You did not specify OS; I suspect Windows and am answering from a Windows 7 perspective. Most entries in the right click menu corresponds to a command line, with the file name substituted for a wildcard placeholder. (Exceptions include cut and `copy'.) Some entries are present for all files. Some, such as this one, are specific to a type of file as indicated by the extension. The right-click menu is governed by the Windows registry.
When multiple versions of Python are installed, one is designated as the default version. If I wanted to change the designation (to 3.4, for instance), I would either go to Control Panel, Programs and Features and 'change' the Python 3.4 installation, or just re-install it from the installer.
If you do not want to do that, you can 'Edit with Idle' (2.7) and the file will then appear under File -> Recent Files on the 3.4 menu.
I have considered requesting that there be an "Edit with Idle x.y' entry for each Python version installed. I do not know if there are techical reasons preventing that. | 1 | 0 | 0 | I currently have python 2.7.6 and 3.4.0 installed. Whenever I rightclick and open with idle, and I try to run it in idle, the version used in 2.7.6. However, I need it to run in 3.4.0. I can work around this issue by opening idle manually and opening a file from idle itself, but this is pretty tedious and I'm wondering if anyone knows a solution to this issue? | Can't open "edit with idle" for python 3.4 with python 2.7 and 3.4 installed. Can only open idle manually | 0 | 0 | 0 | 1,389 |
27,803,331 | 2015-01-06T16:50:00.000 | 0 | 0 | 0 | 0 | python,hdf5,pytables,h5py | 55,962,515 | 2 | false | 0 | 0 | It is unavoidable to not copy that section of the dataset to memory.
Reason for that is simply because you are requesting the entire section, not just a small part of it.
Therefore, it must be copied completely.
So, as h5py already allows you to use HDF5 datasets in the same way as NumPy arrays, you will have to change your code to only request the values in the dataset that you currently need. | 1 | 10 | 1 | I have to work on large 3D cubes of data. I want to store them in HDF5 files (using h5py or maybe pytables). I often want to perform analysis on just a section of these cubes. This section is too large to hold in memory. I would like to have a numpy style view to my slice of interest, without copying the data to memory (similar to what you could do with a numpy memmap). Is this possible? As far as I know, performing a slice using h5py, you get a numpy array in memory.
It has been asked why I would want to do this, since the data has to enter memory at some point anyway. My code, out of necessity, already run piecemeal over data from these cubes, pulling small bits into memory at a time. These functions are simplest if they simply iterate over the entirety of the datasets passed to them. If I could have a view to the data on disk, I simply could pass this view to these functions unchanged. If I cannot have a view, I need to write all my functions to only iterate over the slice of interest. This will add complexity to the code, and make it more likely for human error during analysis.
Is there any way to get a view to the data on disk, without copying to memory? | Is there a way to get a numpy-style view to a slice of an array stored in a hdf5 file? | 0 | 0 | 0 | 568 |
27,803,633 | 2015-01-06T17:08:00.000 | 0 | 0 | 0 | 0 | python,django,django-signals | 61,509,996 | 5 | false | 1 | 0 | In the signals.post_save.connect(receiver=create_customer, sender=Customer)... sender will always be the model which we are defining... or we can use the User as well in the sender. | 1 | 33 | 0 | I am new to Django and I'm not able to understand how to work with Django signals. Can anyone please explain "Django signals" with simple examples?
Thanks in advance. | Django - signals. Simple examples to start | 0 | 0 | 0 | 24,175 |
27,804,805 | 2015-01-06T18:26:00.000 | 1 | 0 | 0 | 0 | tkinter,pypi,qpython | 30,358,514 | 1 | false | 0 | 1 | Tkinter is not installed with QPython, unlike SL4A and Kivy. You'll have to install it yourself. There is a pip console among the scripts, so I'd try using that first. | 1 | 1 | 0 | I'm running QPython on my Android device. Whenever I try to import Tkinter, it says, "no modules found". What is the package name for Tkinter in PyPI? Or where can I download the Tkinter package for QPython? | QPython on Android doesn't get imported | 0.197375 | 0 | 0 | 6,218 |
27,807,343 | 2015-01-06T21:18:00.000 | 0 | 1 | 0 | 0 | python,swig,pybuilder | 27,814,800 | 1 | false | 0 | 1 | Is it possible you built the .so library for another Python version?
PyBuilder does not do anything special about shared objects, especially not when running unit tests.
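One concrete way to do the check suggested next (commands are illustrative; on OS X, otool -L plays the role of ldd):
ldd src/main/python/_A.so | grep python      # shows which libpython the extension links against
head -n 1 "$(which pyb)"                     # shows which interpreter the pyb launcher itself uses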
So try running ldd _A.so and see if that matches the interpreter you're using with pyb? | 1 | 0 | 0 | I have a Python wrapper (to a C lib) generated by Swig.
The unit tests run happily within PyDev.
The project structure follows the PyBuilder suggested setup:
|-src
|-main
|-python
|-A.py
|-_A.so
|-unittest
|-python
|-A_tests.py
When I try to run pyb, I get the following error:
Fatal Python error: PyThreadState_Get: no current thread
Abort trap: 6
NOTE:
If I change A to a pure Python module, everything works.
There must be something (a step) missing related to loading that .so file.
Sorry for a newbie question like this. Any help will be greatly appreciated. | PyBuilder broken for Swig-Python generated wrapper project | 0 | 0 | 0 | 53 |
27,808,101 | 2015-01-06T22:10:00.000 | 2 | 0 | 1 | 0 | python,session,flask | 27,852,529 | 2 | true | 1 | 0 | Note to self: make sure memcached is running. | 1 | 1 | 0 | I have a Python project built using the Flask framework.
The project is established and I have installed and ran it on numerous machines (Ubuntu and OSX) using virtualenv.
I went to set up my project on a new computer with Yosemite installed. It installed all of the requirements with no errors and the site runs locally without errors.
The problem is that the Flask session is always an empty dict (nothing is ever in the session). | Python - Flask not storing session | 1.2 | 0 | 0 | 794 |
27,809,691 | 2015-01-07T00:37:00.000 | 0 | 1 | 1 | 0 | python,login | 67,327,565 | 2 | false | 0 | 0 | I suggest you read passwords either using no-echo from a TTY or just a direct stdin line read, only as a fallback.
Instead call a "password helper" program that is provided via an environment variable such as "TTY_ASKPASS". ("ssh" and even "sudo" can do this!)
This means not only can a user provide an 'ask password with stars' helper, but they can also input the password from other sources like a keyring daemon or a GUI popup.
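A minimal sketch of that idea (the TTY_ASKPASS name is just the convention suggested above, not a standard; getpass is the stdlib no-echo fallback):
import getpass
import os
import subprocess
def read_password(prompt="Password: "):
    helper = os.environ.get("TTY_ASKPASS")              # optional user-supplied helper program
    if helper:
        return subprocess.check_output([helper, prompt]).decode().rstrip("\n")
    return getpass.getpass(prompt)                      # reads from the TTY without echoing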
Do not limit your users! | 1 | 0 | 0 | I made a login to a python program I created. Is there any way to make the characters show up as stars or something other than the actual character? I am using Python version 3.4.2, I need something I can put in my script. Thanks! | Special Characters For Python | 0 | 0 | 0 | 487 |
27,810,571 | 2015-01-07T02:30:00.000 | 1 | 0 | 1 | 0 | python,loops | 27,810,605 | 2 | false | 0 | 0 | Check out the module matplotlib, it was made to give plotting visuals in python. I hope this helps a little. | 1 | 0 | 0 | I am a beginner so I don't think I need to use anything complicated.
Basically I have to print y=x^+3 for the range x=0 to x=4 using formatted output and I don't know how.
From what I have learned so far, I'm supposed to use formatted output, looping and variable width output to do this.
Does anyone know how to do it? Thank you very much. | How to formatted output a graph of an equation using python? | 0.099668 | 0 | 0 | 102 |
27,810,710 | 2015-01-07T02:46:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 27,810,779 | 1 | true | 0 | 1 | All the downloaded libraries are in the C:\Python27\Lib\site-packages. You can check this folder before you uninstall a version of Python.
I agree with Rinzler: for Python, each version will have its own Tkinter. Maybe it's just a problem with the interpreter choice of your IDE. | 1 | 0 | 0 | I have been having problems importing Tkinter. I have done research here and found that it's because I have had both 64 bit and 32 bit python on my machine. I currently use 32 bit but Tkinter is pointing to the 64 bit version. I think the easiest fix is to uninstall python and reinstall it. Will I lose all my downloaded libraries and code I've written if I do this?
It's python 2.7 on Windows 7. | Uninstalling Python - Do I lose my installed packages and code? | 1.2 | 0 | 0 | 2,920
27,813,209 | 2015-01-07T05:52:00.000 | 0 | 0 | 1 | 0 | python,intellij-idea | 27,828,457 | 1 | true | 0 | 1 | Normally the binary modules for a Python interpreter are rescanned on IntelliJ IDEA restart. Please try restarting the IDE. | 1 | 0 | 0 | community!
My problem:
I have an item, namely gi.repository.Gtk, marked as "Unresolved reference: Gtk".
The Gtk module did not exist at the moment of setting up the Python SDK in IDEA; however, I installed it a little bit later.
I can't figure out how to force a re-sync of the classpath for Python. | intellij: update classes in classpath for python plug-in | 1.2 | 0 | 0 | 93
27,814,764 | 2015-01-07T07:54:00.000 | 0 | 1 | 0 | 0 | python,firefox,freebsd,yahoo-mail | 27,815,406 | 1 | false | 1 | 0 | Check if the button is inside an iframe. If it is then switch to frame and try it again. | 1 | 0 | 0 | I have a python script that sends a mail from Yahoo including an attachment. This script runs on a Freebsd 7.2 client and uses firefox : Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.0.10) Gecko/2009072813 Firefox/3.0.10. The script fails with the error - Element xpath=//input[@value="Send"] not found. Checked the Page source, the x-Path exists. However, it is not visible in the compose page.
Kindly help me sort out this issue. | Send button in yahoo mail page is not visible - Firefox, Freebsd 7.2 | 0 | 0 | 1 | 673
27,816,562 | 2015-01-07T09:47:00.000 | 8 | 0 | 1 | 0 | python | 27,943,402 | 1 | true | 0 | 0 | struct is a module for packing and unpacking data to and from C representations. P represents void * (a generic pointer). On 32-bit systems a pointer is 4 bytes, and on a 64-bit system a pointer requires 8 bytes. struct.calcsize('P') calculates the number of bytes required to store a single pointer -- returning 4 on a 32-bit system and 8 on a 64-bit system. | 1 | 2 | 0 | I stumbled upon this while going through installation instructions of scikit-learn.
To check the architecture of your system, whether it is 32 or 64 bit
What does it exactly mean? What does the P format specifier mean? How does it differ between a 32 bit system and a 64 bit system?
What happens when I put different specifiers? | What does struct.calcsize('P') exactly mean? | 1.2 | 0 | 0 | 3,201 |
27,817,594 | 2015-01-07T10:42:00.000 | 0 | 1 | 1 | 0 | python,xeon-phi | 32,990,976 | 1 | false | 0 | 0 | Why not first port it from Python (which is bytecode for a virtual machine -- that is a software emulation of a CPU -- then to be translated and executed on a certain 'real' hardware CPU). You could port to C++ or so, which -- when compiled for the target platform -- produces machine code that runs natively on the target. That should improve execution speed, right, so you may not even need a Xeon Phi. | 1 | 2 | 0 | I would like to port a semi-HPC code scriptable with Python to Xeon Phi, to try out the performance increase; it cannot be run in offload mode (data transfers would be prohibitive), the whole code must be run on the co-processor.
Can someone knowledgeable confirm that it means I will have to "cross-compile" all the libraries (including Python) for the Xeon Phi arch, have those libs mounted over NFS on the Xeon Phi, and then execute it all there?
For cross-compilation: what is the target arch? Of course, for numerics the xeon-phi is a must due to extended intrinsics, but for e.g. Python, would the binaries and libs be binary-compatible with amd64? That would make it much easier, essentially only changing some flags for the number-crunching parts.
UPDATE: For the record, we've had a very bad support from Intel on the forums; realizing poor technical state of the software stack (yocto could not compile and so on), very little documentation and so on, we abandoned this path. Goodbye, Xeon Phi. | Running Python on Xeon Phi | 0 | 0 | 0 | 1,664 |
27,819,021 | 2015-01-07T11:58:00.000 | 2 | 0 | 0 | 0 | python,matplotlib,scale,histogram,logarithm | 27,820,207 | 1 | false | 0 | 0 | So I assume that you want to have a logscale on the y axis from what you have written.
Obviously, what you want to achieve won't be possible. log(0) is not defined mathematically (it diverges to negative infinity). You could, in theory, set ylim to a very small number close to 0, but that wouldn't help you either. Your y axis would become larger and larger as you approach 0, so you couldn't display whatever you want to show in a way that would make any sense. | 1 | 0 | 1 | Is there any way to plot a histogram in matplotlib with log scale that includes 0?
plt.ylim( ymin = 0 ) doesn't work because log(0) is NaN and matplotlib removes it... :( | Distinct 0 and 1 on histogram with logscale | 0.379949 | 0 | 0 | 45
27,819,930 | 2015-01-07T12:50:00.000 | 0 | 0 | 0 | 0 | python,django,database,postgresql,security | 27,964,212 | 3 | false | 1 | 0 | Yes, this is practiced sometimes, but not commonly. The best way to do it is to grant specific privileges to the database user, not in Django. Making such restrictions means that we do not have to fully trust the application, because it might change files / data in the DB in ways that we do not expect.
So, to sum up: create one user able to create / modify data, and use another, restricted one for normal operation.
It's also quite common in companies to create one user to insert data and another one for employees / scripts to access it. | 2 | 2 | 0 | I'm running Django with Postgres database. On top of application-level security checks, I'm considering adding database-level restrictions. E.g. the application code should only be able to INSERT into log tables, and not UPDATE or DELETE from them.
I would manually create database user with appropriate grants for this. I would also need a more powerful user for running database migrations.
My question is, do people practice things like this? Any advice, best practices on using restricted database users with Django?
Edit: To clarify, there's no technical problem, I'm just interested to hear other people's experiences and takeaways. One Django-specific thing is, I'll need at least two DB users: for normal operation and for running migrations. Where do I store credentials for the more privileged user? Maybe make manage.py migrate prompt for password?
As for the reasoning, suppose my app has a SQL injection vulnerability. With privileged user, the attacker can do things like drop all tables. With a more limited user there's slightly less damage potential and afterwards there's some evidence in insert-only log tables. | Restricted database user for Django | 0 | 1 | 0 | 1,088 |
27,819,930 | 2015-01-07T12:50:00.000 | 1 | 0 | 0 | 0 | python,django,database,postgresql,security | 27,972,123 | 3 | false | 1 | 0 | For storing the credentials to the privileged user for management commands, when running manage.py you can use the --settings flag, which you would point to another settings file that has the other database credentials.
Example migrate command using the new settings file:
python manage.py migrate --settings=myapp.privileged_settings | 2 | 2 | 0 | I'm running Django with Postgres database. On top of application-level security checks, I'm considering adding database-level restrictions. E.g. the application code should only be able to INSERT into log tables, and not UPDATE or DELETE from them.
I would manually create database user with appropriate grants for this. I would also need a more powerful user for running database migrations.
My question is, do people practice things like this? Any advice, best practices on using restricted database users with Django?
Edit: To clarify, there's no technical problem, I'm just interested to hear other people's experiences and takeaways. One Django-specific thing is, I'll need at least two DB users: for normal operation and for running migrations. Where do I store credentials for the more privileged user? Maybe make manage.py migrate prompt for password?
As for the reasoning, suppose my app has a SQL injection vulnerability. With privileged user, the attacker can do things like drop all tables. With a more limited user there's slightly less damage potential and afterwards there's some evidence in insert-only log tables. | Restricted database user for Django | 0.066568 | 1 | 0 | 1,088 |
27,821,571 | 2015-01-07T14:24:00.000 | 5 | 0 | 0 | 0 | python,gdal | 27,829,029 | 2 | true | 0 | 0 | Currently both FileGDB and OpenFileGDB drivers handle only vector datasets. Raster support is not part of Esri's FGDB API.
You will need to use Esri tools to export the rasters to another format, such as GeoTIFF. | 1 | 0 | 1 | I'm working on a tool that converts raster layers to arrays for processing with NumPy, and ideally I would like to be able to work with rasters that come packaged in a .gdb without exporting them all (especially if this requires engaging ArcGIS or ArcPy).
Is this possible with the OpenFileGDB driver? From what I can tell this driver seems to treat raster layers the same as vector layers, which gives you access to some data about the layer but doesn't give you the ReadAsArray functionality. | Working with rasters in file geodatabase (.gdb) with GDAL | 1.2 | 0 | 0 | 2,796 |
27,823,606 | 2015-01-07T16:09:00.000 | 0 | 1 | 0 | 0 | angularjs,python-3.4,simplehttpserver | 31,328,765 | 1 | true | 1 | 0 | resolved it by re-installing python on my machine. | 1 | 0 | 0 | I'm not savvy with Python or server programming at all. My AVG blocked Python from running SimpleHTTPServer. I was able to install Python 3.4.2 successfully, but noticed that SimpleHTTPServer has been moved into HTTP server.
How can I set up my machine or Python 3.4.2 so that I can just type python -m SimpleHTTPServer when working on my AngularJS projects locally?
I'm running Windows 7 64.
Thanks, | python simple server with 3.4.2 | 1.2 | 0 | 0 | 192 |
27,824,959 | 2015-01-07T17:19:00.000 | 3 | 0 | 1 | 0 | python,numpy,cython | 27,832,938 | 1 | false | 0 | 0 | You have summed up the situation correctly. As of this writing, you can do one of three things:
Modify NumPy to allow sharing the declarations in mtrand.pxd
Use NumPy's random generators through their default interface (perhaps you could store all the random numbers in advance outside of the nogil block?)
Use a random number generator written in C (or possibly C++ if you are having Cython generate C++ code).
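For the third option, a rough Cython sketch of wrapping the C++11 generators (not from the original answer; the distribution, seed and function names are illustrative):
# distutils: language = c++
cdef extern from "<random>" namespace "std" nogil:
    cdef cppclass mt19937:
        mt19937(unsigned int seed)
    cdef cppclass uniform_real_distribution[T]:
        uniform_real_distribution(T a, T b)
        T operator()(mt19937& gen)
def mean_of_uniforms(int n, unsigned int seed):
    cdef mt19937 gen = mt19937(seed)
    cdef uniform_real_distribution[double] dist = uniform_real_distribution[double](0.0, 1.0)
    cdef double acc = 0.0
    cdef int i
    with nogil:
        for i in range(n):
            acc += dist(gen)    # nogil-safe as long as each thread owns its own generator
    return acc / n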
Honestly, I'd probably do the last one. If you can use C++ 11, there are several good random number generators now included in the C++ standard library that you could use. | 1 | 4 | 0 | I am trying to cythonise something I did which involves random number generation inside a parallelised loop. I wanted to use mtrand but since it's Python code it can't work from a nogil block and for some reason mtrand's .pyx isn't exposed for the rest of us to use.
I know I can use rand or any other C RNG (e.g. gsl); is there a more standard way? | thread-safe random number generation with cython | 0.53705 | 0 | 0 | 698 |
27,825,854 | 2015-01-07T18:10:00.000 | 2 | 1 | 1 | 0 | python,anaconda,conda | 28,181,251 | 2 | true | 0 | 0 | The script needs to exit nonzero. If the tests fail, call sys.exit(1) in the script. | 1 | 3 | 0 | I am creating a conda recipe, and have added run_test.py . These are unittest classes.
Unfortunately, when there are errors, the package is still created.
My question: how do I inform conda that the tests failed, so that it does not continue with the package build?
run_test.py contains :
import unittest
suit = unittest.TestLoader().discover("../tests/unitTest")  #, pattern="test[AP][la]*[sr].py")
unittest.TextTestRunner(verbosity=2).run(suit)
I do add the files in meta.yaml
test:
files:
- ../tests/unittest/
This is the output:
Ran 16 tests in 2.550s
FAILED (errors=5)
===== PACKAGE-NAME-None-np18py27_0 OK ====
I want to stop the build | How to fail a conda package build when there are errors on run_test.py | 1.2 | 0 | 0 | 541 |
27,828,737 | 2015-01-07T21:11:00.000 | 1 | 0 | 0 | 0 | python-2.7,virtualenv,mysql-python,centos6,percona | 27,829,817 | 1 | true | 0 | 0 | Found the solution!
I think it was improper of me to install mysql-devel in the first place, so I went ahead and uninstalled it.
Instead, I used a package supplied by Percona - Percona-Server-devel-55
yum install Percona-Server-devel-55 and the problem is solved! | 1 | 1 | 0 | I've gone through many threads related to installing mysql-python in a virtualenv, including those specific to users of Percona. None have solved my problem thus far.
With Percona, it is normal to get a long error on pip install MySQL-python in the virtualenv that ultimately says EnvironmentError: mysql_config not found. One method to remedy this is yum install mysql-devel, which I've done. I can actually get mysql-python to install properly outside of the virtualenv via yum.
I'm getting the error in the virtualenv only - it uses Python 2.7.9, whereas 2.6.6 is what comes with CentOS.
Also, with MySQL-python installed via yum it will import to the OS's python interpreter, but will not import into the virtualenv's python interpreter.
To clarify, I only installed mysql-python via yum to see whether or not it would work that way. I would prefer it be by pip, in the environment only.
What am I missing here? As far as I'm aware it should work - considering it will work outside of virtualenv. | Unable to get these to cooperate: mysql-python + virtualenv + percona + centos6 | 1.2 | 1 | 0 | 110 |
27,831,245 | 2015-01-08T00:46:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,datetime,time | 27,831,476 | 2 | true | 0 | 0 | Yes that's a good way of doing it: you store "Unix epoch" style times, and convert them to whatever local time you need before displaying them. | 1 | 1 | 0 | I've used time.time() to generate timestamps across client apps. These timestamps are accumulated and sent in batches to an external and independent location.
While rendering these timestamps back on the client application I intend to use datetime.fromtimestamp(ts_from_external_source) in order to create local datetime objects, without defining the timezone so it assumes the local by default.
Is this the recommended way of doing it? | Correct usage of utc timestamps and local datetime | 1.2 | 0 | 0 | 118 |
27,832,993 | 2015-01-08T04:32:00.000 | 1 | 0 | 0 | 0 | python,flask | 27,863,220 | 1 | true | 1 | 0 | app.app_context loads the application and any extentions you have loaded.
A request context is loaded when you are dealing with a request.
A good example:
If you have a background cron that does some database work, you'll need to make use of app_context to get access to the models.
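A minimal sketch of that cron/background case (the function name is made up for illustration):
from flask import Flask
app = Flask(__name__)
def nightly_cleanup():
    # outside of a request there is no context yet, so push one explicitly
    with app.app_context():
        pass  # query or update your models here; extensions can now find the app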
You'll be in a request context pretty much whenever you're handling a view. | 1 | 0 | 0 | I'm new to Flask but have experience with PHP. I know there are session variables and global variables just as in PHP, but what do the contexts actually mean? I read the documentation but could not understand what it was saying.
What are the application and request contexts, and what is app.app_context()? | What are the application and request contexts? | 1.2 | 0 | 0 | 94
27,834,570 | 2015-01-08T06:57:00.000 | 0 | 0 | 0 | 0 | python,django,environment-variables | 28,414,823 | 1 | true | 1 | 0 | Well, it turned out that the AppConfig is not the right spot for a task like that. I implemented my loading of secrets with a modification of the project's manage.py, and I'm planning to release an app with all of the code in the near future :) | 1 | 0 | 0 | I'd like to have my secret keys loaded via environment vars that shall be checked on startup of my Django app. I'm using an AppConfig for that purpose, because that code will be executed on startup.
For now I wrote a little helper to get the vars and a list of vars to check, which is working fine.
The problem:
I also wrote a Django management command to help enter and store the needed vars and save them to the user's .profile, BUT when I have my checks in place the AppConfig will raise errors before I even have the chance to run my configuration command :(
So how do I enable that configuration management command whilst still having that env check run on startup?
For now I'm going to use a plain Python script so as not to load Django at all (which I don't need for now anyway), but in case I might need to alter the database (and thus need Django for some setup task), how would I be able to sneak past my own startup check in my AppConfig?
Where else might I place the checks?
I tried the main urls.py, but this will only be loaded once the first URL lookup is needed, and thus one might start the server and not see any errors, and still the app will not work once the first URL is entered in the browser. | How To check for environment vars during django startup | 1.2 | 0 | 0 | 170
27,835,528 | 2015-01-08T08:06:00.000 | 0 | 0 | 0 | 1 | python,nginx,tornado | 27,880,443 | 1 | true | 1 | 0 | Each of these ports is a different python process, right? At some point you must be passing in the port number to each process and calling app.listen(port) (or one of the related bind/listen methods). Just save the port number at that time (could just be a global variable if you only have one server per process) | 1 | 0 | 0 | Nginx make PORT 8000:8003 to Tornado server.py
I want to get the PORT number in MainHandler and print it in the browser when someone visits my site.
And I don't want to make several copies of server.py as server8000.py, server8001.py, ...; I want just one main entry point to solve this problem.
How can I do it? | How can I get the PORT number when using Nginx + Tornado? | 1.2 | 0 | 0 | 96 |
27,840,512 | 2015-01-08T12:47:00.000 | 0 | 1 | 0 | 0 | python,pytest | 27,842,432 | 2 | false | 0 | 0 | Use the f (failed) report flag:
py.test -r f
pytest will then report just the names of the failing tests, the line numbers where the failures occurred, and the type of error that caused each failure. | 1 | 4 | 0 | Pytest is able to provide nice traceback errors for the failed tests but is doing this after all the tests were executed and I am interested in displaying the errors progressively.
I know that one workaround would be to make it fail fast at the first error, but I do not want this, I do want it to continue. | How can I quickly display failure details while using pytest? | 0 | 0 | 0 | 1,385 |
27,846,631 | 2015-01-08T18:02:00.000 | 1 | 0 | 1 | 0 | python,generator | 27,846,704 | 1 | false | 0 | 0 | But this generator must somehow know it's "bounds", right?
No it doesn't! It just keeps on trucking until it hits a StopIteration. It is only ever handling one element in memory at a time, then discards it. | 1 | 4 | 0 | I understand the general idea that a generator returns an iterable that 'saves state' and doesn't calculate everything at once, rather it calculates with each call to next. How does this work? For example [x for x in range(10) if x%2==0] vs (x for x in range(10) if x%2==0). In the list comprehension, everything is calculated and stored in memory at once.
In the generator, the entire list is not produced but instead an iterable generator object which calculates with each call to next. But this generator must somehow know it's "bounds", right? How does the generator know, if it isn't in the background performing all the calculations, where to pick up where it left off? I would think it has to know each step in the list comprehension and ultimately if you end up cycling through the entire generator until a StopIteration is hit, I would think you are using roughly the same amount of memory. | How do Python Generators Save Memory | 0.197375 | 0 | 0 | 1,854 |
27,850,953 | 2015-01-08T22:48:00.000 | 0 | 0 | 1 | 1 | python,dll,cython | 34,793,148 | 2 | false | 0 | 1 | Just installed PyInstaller - it has the option to compile to one file. | 1 | 2 | 0 | I had created an application in Python and then I tried to make an executable from it.
It works well on Ubuntu and Windows, but only when Python is installed on the system.
Otherwise (tried only on Windows) I get the error that "The application can't start because python34.dll is missing".
What I do (filename is curr.py, also I have icon.res for icon):
python C:\Python34\Scripts\cython-script.py --embed curr.py
in curr.c replace wmain by main (without doing it app won't be compiled at all)
gcc curr.c -o curr.exe -IC:\Python34\include -LC:\Python34\libs icon.res -lpython34 -mwindows --static
Of course, if I copy python34.dll to the app's folder everything is OK.
Do I have another way? | Python34.dll is missing | 0 | 0 | 0 | 13,987 |
27,853,300 | 2015-01-09T03:16:00.000 | 2 | 1 | 0 | 0 | python,python-2.7,unicode,compilation | 33,888,200 | 2 | false | 0 | 0 | We can re-compile already installed python with 4-byte Unicode or 2-byte Unicode
Full Flow
1. Download Python 2.7.x and extract it.
2. Go to the directory Python2.7.x (in my case it is Python2.7.10).
3. Run "sudo ./configure --enable-unicode=ucs4" or "sudo ./configure --enable-unicode=ucs2", whichever you want.
4. Build and install it with make followed by sudo make install (or sudo make altinstall to leave the system Python untouched).
Now you can check whether it is UCS2 or UCS4 as follows:
1. Go to a terminal.
2. Type python and press Enter.
3. Enter the following commands:
import sys
print sys.maxunicode
If the output is 1114111 it is UCS4; otherwise, if the output is 65535, it is UCS2. | 1 | 1 | 0 | I have tried ./configure --enable-unicode and ./configure --enable-unicode=ucs4 but the command import sys; print sys.maxunicode is still 65535.
How should I fix this and compile Python with 4-byte unicode enabled? | How to recompile Python2.7 with 4-byte unicode enabled? | 0.197375 | 0 | 0 | 3,667 |
27,866,775 | 2015-01-09T18:21:00.000 | 1 | 0 | 1 | 1 | python,windows,registry | 27,867,537 | 1 | true | 0 | 0 | Fixed it be deleting python 32 bit and installing python 64 bit, works like a charm | 1 | 0 | 0 | I am stuck here trying to make a python script to access windows registry. My issue is that I get an error file not found using the following path:
"SOFTWARE\Microsoft\Windows
NT\CurrentVersion\NetworkList\Signatures\Unmanaged"
It works until NetworkList and afterwards it just returns the error of file not found. Using regedit I can see that the keys that I am trying to access do exist. Any ideas? | Accessing windows registry Python | 1.2 | 0 | 0 | 230
27,870,003 | 2015-01-09T22:05:00.000 | 0 | 0 | 1 | 1 | python,pip,sudo,osx-yosemite | 57,711,516 | 5 | false | 0 | 0 | If you altered your $PATH variable that could also cause the problem. If you think that might be the issue, check your ~/.bash_profile or ~/.bashrc | 2 | 156 | 0 | While installing pip and python I have ran into a that says:
The directory '/Users/Parthenon/Library/Logs/pi' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.
because I now have to install using sudo.
I had python and a handful of libraries already installed on my Mac, I'm running Yosemite. I recently had to do a clean wipe and then reinstall of the OS. Now I'm getting this prompt and I'm having trouble figuring out how to change it
Before my command line was Parthenon$ now it's Philips-MBP:~ Parthenon$
I am the sole owner of this computer and this is the only account on it. This seems to be a problem when upgrading to python 3.4, nothing seems to be in the right place, virtualenv isn't going where I expect it to, etc. | pip install: Please check the permissions and owner of that directory | 0 | 0 | 0 | 189,575 |
27,870,003 | 2015-01-09T22:05:00.000 | 61 | 0 | 1 | 1 | python,pip,sudo,osx-yosemite | 39,810,683 | 5 | false | 0 | 0 | pip install --user <package name> (no sudo needed) worked for me for a very similar problem. | 2 | 156 | 0 | While installing pip and python I have ran into a that says:
The directory '/Users/Parthenon/Library/Logs/pi' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.
because I now have to install using sudo.
I had python and a handful of libraries already installed on my Mac, I'm running Yosemite. I recently had to do a clean wipe and then reinstall of the OS. Now I'm getting this prompt and I'm having trouble figuring out how to change it
Before my command line was Parthenon$ now it's Philips-MBP:~ Parthenon$
I am the sole owner of this computer and this is the only account on it. This seems to be a problem when upgrading to python 3.4, nothing seems to be in the right place, virtualenv isn't going where I expect it to, etc. | pip install: Please check the permissions and owner of that directory | 1 | 0 | 0 | 189,575 |
27,879,952 | 2015-01-10T18:52:00.000 | 2 | 0 | 0 | 0 | django,python-2.7,pyqt4 | 27,880,120 | 1 | true | 1 | 1 | I'm not aware of any libraries to port a PyQT desktop app to a django webapp. Django certainly does nothing to enable this one way or another. I think, you'll find that you have to rewrite it for the web. Django is a great framework and depending on the complexity of your app, it might not be too difficult. If you haven't done much with web development, there is a lot to learn!
If it seemed like common sense to you that you should be able to run a desktop app as a webapp, consider this:
Almost all web communication that you likely encounter is done via HTTP. HTTP is a protocol for passing data between servers and clients (often, browsers). What this means is that any communication that takes place must be resolved into discrete chunks. Consider an example flow:
You go to google in your browser.
Your browser then hits a DNS server (or cache) that resolves the name google.com to some IP address.
Cool, now your browser makes a request to that IP address and says "get me some stuff".
Google decides to send you back a minimal amount of HTML and lots of minified JavaScript in the page.
Your browser realizes that there are some image links in the HTML and so it makes additional requests to google to get each of the images so that it can display them.
Now all the content is loaded on your browser so it starts to execute the JavaScript code, and that code needs some more data from google so it starts sending requests to google too.
This is just a small example of how fundamentally different a web application operates than how a desktop application does. On a desktop app you have the added convenience that any operation doesn't need to be "packaged up" and sent, then have an action taken, etc (unless you're using a messaging architecture, but that's relatively uncommon outside of enterprise apps). | 1 | 1 | 0 | I've designed a desktop app using PyQt GUI toolkit and now I need to embed this app on my Django website. Do I need to clone it using django's own logic or is there a way to get it up on website using some interface. Coz I need this to work on my website same way it works as desktop. Do I need to find out packages in django to remake it over the web or is there way to simplify the task?
Please help. | Cloning PyQt app in django framework | 1.2 | 0 | 0 | 1,730 |
27,880,440 | 2015-01-10T19:41:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 27,880,471 | 1 | true | 0 | 0 | That's not what backwards compatible means. You can run a 2.6 script on 2.7, but if you try it the other way round you're likely to run into problems with new features added in 2.7.
If it didn't work like that, it would be impossible ever to add new features. | 1 | 0 | 0 | I had the impression that 2.7 was backwards compatible with 2.6?
I have a python program that I need to run on a server. I have developed it on a python version 2.7.6 and the server has python version 2.6.6.
What happens is that my program stops running when I run it on the server after a few minutes. I get the message: ' No handlers could be found for logger "sickle.app" ' and then it quits. However I get this message when I run the program locally to, but the program keeps running.
The way I run the program is that I send the program and its requirements to the server. Create a virtual environment and pip install -r requirements.txt and then run the program. So everything should be similar on the server as on the development computer.
Am I doing something wrong here? | Issues with Python script developed in 2.7 running on 2.6 | 1.2 | 0 | 0 | 623 |
27,882,449 | 2015-01-10T23:53:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,testing,discovery | 69,606,469 | 2 | false | 0 | 0 | Test discovery in python Unittest checks if all the python test files are modules that are importable from top-level directory of your project. | 1 | 4 | 0 | All search results return with "how-to" information rather than "what-it-is" information. I'm looking for a simple explanation of what this feature even is. | What does test discovery mean as it relates to Python unit testing? | 0 | 0 | 0 | 1,658 |
27,883,769 | 2015-01-11T03:49:00.000 | 2 | 0 | 1 | 0 | python,numpy,scipy,sparse-matrix,gil | 27,961,586 | 1 | true | 0 | 0 | They do, for Scipy versions >= 0.14.0 | 1 | 4 | 1 | Question
Do scipy.sparse functions, like csr._mul_matvec release the GIL?
Context
Python functions that wrap foreign code (like C) often release the GIL during execution, enabling parallelism with multi-threading. This is common in the numpy codebase. Is it also common in scipy.sparse? If so which operations release the GIL? If they don't release the GIL then is there a fundamental issue here why not or is it just lack of man-power? | Do scipy.sparse functions release the GIL? | 1.2 | 0 | 0 | 209 |
27,884,051 | 2015-01-11T04:47:00.000 | 0 | 1 | 1 | 0 | python,module,system | 27,884,145 | 3 | false | 0 | 0 | It's not possible to do that without a path. The only thing you can do is put all of the modules that you want to use in the same directory; you don't have to put them in python\lib, you can put them in a folder on your desktop, for example. Then run your scripts from that folder, but always be sure to start your scripts with #!/usr/bin/env python. | 1 | 0 | 0 | To avoid type in long path name, I am trying to create a folder to put all my .py file in. And I want it to be some sort of "default" folder that every time I run .py file, the system will search this folder to look for that file.
One solution i figured, is to put my .py file in those module folders like "python\lib", and I can call python -m filename.
But I do not want to make a mess in the lib folder.
Is there any other ways to do it? Thanks! | How do I run python file without path? | 0 | 0 | 0 | 5,561 |
27,884,051 | 2015-01-11T04:47:00.000 | 1 | 1 | 1 | 0 | python,module,system | 27,884,333 | 3 | false | 0 | 0 | for example: first type
sys.path.append("/home/xxx/your_python_folder/")
then you can import your own .py file | 2 | 0 | 0 | To avoid type in long path name, I am trying to create a folder to put all my .py file in. And I want it to be some sort of "default" folder that every time I run .py file, the system will search this folder to look for that file.
One solution i figured, is to put my .py file in those module folders like "python\lib", and I can call python -m filename.
But I do not want to make a mess in the lib folder.
Is there any other ways to do it? Thanks! | How do I run python file without path? | 0.066568 | 0 | 0 | 5,561 |
27,884,404 | 2015-01-11T05:54:00.000 | 279 | 1 | 0 | 0 | python,pytest | 27,899,853 | 2 | true | 0 | 0 | I'm not sure this will solve your problem, but you can pass --durations=N to print the slowest N tests after the test suite finishes.
Use --durations=0 to print all. | 1 | 185 | 0 | I am running unit tests on a CI server using py.test. Tests use external resources fetched over network. Sometimes test runner takes too long, causing test runner to be aborted. I cannot repeat the issues locally.
Is there a way to make py.test print out execution times of (slow) tests, so that pinning down problematic tests becomes easier? | Printing test execution times and pinning down slow tests with py.test | 1.2 | 0 | 0 | 37,228
27,884,703 | 2015-01-11T06:49:00.000 | 2 | 0 | 1 | 0 | python,python-docx | 29,425,929 | 5 | true | 0 | 0 | Support for run styles has been added in latest version of python-docx | 1 | 13 | 0 | I am using python-docx 0.7.6.
I can't seem to be able to figure out how to set font family and size for a certain paragraph.
There is .style property but style="Times New Roman" doesn't work.
Can somebody please point me to an example?
Thanks. | Set paragraph font in python-docx | 1.2 | 0 | 0 | 48,463 |
27,885,666 | 2015-01-11T09:36:00.000 | 1 | 0 | 1 | 1 | python,virtualenv,virtualenvwrapper | 27,885,868 | 1 | false | 0 | 0 | When creating virtual environment, you can specify which python to use.
For example,
virtualenv -p/usr/bin/python2.7 env
Same for mkvirtualenv | 1 | 0 | 0 | using autoenv and virtualenvwrapper in python and trying to configure in it the specific python version.
the autoenv file (called .env) contains (simply)
echo 'my_env'
is there a way to configure its Python version? | how to setup virtualenv to use a different python version in each virtual env | 0.197375 | 0 | 0 | 572
27,885,733 | 2015-01-11T09:46:00.000 | 1 | 0 | 0 | 0 | python,postgresql,security,csv | 27,885,848 | 2 | false | 1 | 0 | The script makes a POST request to your Django web server either with login/pwd or unique string.
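A minimal sketch of the client side of that idea using the requests library (the URL, payload fields and token scheme are placeholders, not part of the original answer):
import requests
payload = {"player": "some_id", "score": 42}        # whatever game data you batch up
resp = requests.post("https://yoursite.example/api/game-data/",
                     json=payload,
                     headers={"Authorization": "Token <per-user-token>"},
                     timeout=10)
resp.raise_for_status()                             # fail loudly if the server rejected the upload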
The web server validates credentials and inserts data into DB. | 1 | 0 | 0 | I have a python script set up that captures game data from users while the game is being played. The end goal of this is to get all that data, from every user, into a postgresql database on my web server where it can all be collated and displayed via django
The way I see it, I have 2 options to accomplish this:
While the python script is running, I can directly open a connection to the db and upload to it in real time
During the game session, instead of uploading to the db directly, I can save out a csv file to their computer and have a separate app that will find these log files and upload them to the db at a later point
I like (1) because it means these log files cannot be tampered with by the user as it is going straight to the db - therefore we can prevent forgery and ensure valid data.
I like (2) because the initial python script is something that every user would have on their computer, which means they can open it at will (it must be this way for it to work with the game). In other words, if I went with (1) users would be exposed to the user/pass details for connecting to the db which is not secure. With (2) the app can just be an exe where you cant see the source code and cant see the db login details
My questions:
So in one case I'd be exposing login details, in the other I'd be risking end users tampering with csv files before uploading. Is there a method that could combine the pros of the 2 methods without having to deal with the cons?
At the very least, if I had to choose either of these 2 methods, whats the best way to get around its downfall? So is it possible to prevent exposing db credentials in a publicly available python script? And if I have to save out csv files, is there a way to prevent tampering or checking if it has been tampered with? | Clients uploading to database | 0.099668 | 1 | 0 | 70 |
27,894,077 | 2015-01-12T01:20:00.000 | 1 | 0 | 1 | 1 | python-2.7,ipython | 35,580,619 | 3 | false | 0 | 0 | I had this problem on Windows after I had 2 version of Python installed. Even after I uninstalled one version of Python through Control Panel, the folder C:\Python2.7 still existed. After I deleted this folder manually, ipython started work normally. | 3 | 2 | 0 | I installed pythonxy (2.7.9), but when I try to run ipython from windows powershell, I get an error "failed to create process." What steps can I take to solve this problem.
Thanks, | When I try to run ipython on powershell, I get the error "Failed to create process" | 0.066568 | 0 | 0 | 1,609 |
27,894,077 | 2015-01-12T01:20:00.000 | 1 | 0 | 1 | 1 | python-2.7,ipython | 42,178,769 | 3 | false | 0 | 0 | Go to cmd and type where iPython. if it results in two locations, delete one and proceed with other. Usually this happens when you have installed Anaconda distribution after installing python in your local system. Deleting python version locally would be a better solution. And then go to the root folder, (Ex: C:\Users\Desktop\new_folder), by holding shift, click right button and go to open command prompt here. Type iPython. This should solve your problem. | 3 | 2 | 0 | I installed pythonxy (2.7.9), but when I try to run ipython from windows powershell, I get an error "failed to create process." What steps can I take to solve this problem.
Thanks, | When I try to run ipython on powershell, I get the error "Failed to create process" | 0.066568 | 0 | 0 | 1,609 |
27,894,077 | 2015-01-12T01:20:00.000 | 0 | 0 | 1 | 1 | python-2.7,ipython | 42,316,996 | 3 | false | 0 | 0 | You can also rename folders, for example C:\Python27_ , so that they are not found. This way you don't have to delete your libraries! Then you remove the underscore depending on the project you are working in.
Make sure you have the relevant directories in the path, otherwise the version you do want won't be found at all.
Sorry if this sounds silly, but after painlessly failing to set up virtualenv on Windows, this solution comes in handy for having multiple Python installations. | 3 | 2 | 0 | I installed pythonxy (2.7.9), but when I try to run ipython from windows powershell, I get an error "failed to create process." What steps can I take to solve this problem.
Thanks, | When I try to run ipython on powershell, I get the error "Failed to create process" | 0 | 0 | 0 | 1,609 |
27,895,863 | 2015-01-12T05:32:00.000 | 1 | 0 | 1 | 1 | python,macos,python-2.7,python-3.x,pillow | 27,895,907 | 2 | true | 0 | 0 | You can use the built-in Python, there's nothing "bad" about it. If someday you run into some limitation (unlikely), you can always switch then. | 1 | 0 | 0 | I started learning python on a PC, by installing Python 3, and so when I switched to mac I thought "Look at that, you can just type idle in the terminal and there's Python 2, let me just use that," not wanting to have to deal with two versions on the same machine. However, I read that it's recommended to install a newer version of Python and not to mess up with the built-in install.
Is it a bad practice to use the built-in version? For the matter, I'm not a heavy programmer, just like to play with Python every now and then. The only extra module I've installed was Pillow. | Is it bad to use the built-in python on mac? | 1.2 | 0 | 0 | 1,327 |
27,904,399 | 2015-01-12T14:35:00.000 | 0 | 1 | 0 | 0 | python,pygame,artificial-intelligence,keypress,pyhook | 28,001,301 | 1 | false | 0 | 0 | I believe you can simply use Pygame. Simply check for key events and use pygame.time to tick during the keypress. | 1 | 0 | 0 | I am doing an AI project based on Keyboard Analytics. In part 1 of the project, I have to build a python based application which will record keyboard inputs. I have some requirements.
I require a breakdown of input. For example, 'I' is CapsLock + 'i' or Shift + 'i'.
I also want to be able to find the duration of a keypress.
I need to do this globally. Not restricted to an application.
I have considered pyHook + win32 combo. But I don't think it gives keyPress duration
I have also considered pyGame. But, it's limited to the application.
Is there any module that will help me do this? Or any way I can combine existing modules to get the job done? | Getting global KeyPress duration using Python | 0 | 0 | 0 | 193 |
27,909,442 | 2015-01-12T19:36:00.000 | 0 | 1 | 0 | 1 | python,email,osx-yosemite | 27,913,491 | 1 | true | 0 | 0 | adding this an answer cause not enough space in comments.
It might work, but it's highly unlikely, and if you can send outbound mail, it will most likely be spam-foldered or dropped. The reason most apps use a dedicated mail server or smart host is that there are lots of other things that need to be set up besides the mail server (DNS records, SPF, DKIM, etc.). By default, if you type sendmail [email protected] on your Mac, type your message, and end it with a . on a line by itself, your Mac will try to deliver it using its internal server (postfix). It will look up the right-hand side, look for MX records, try to connect to port 25 on the lowest-order MX, and do all the things that a mail server does when delivering email. But if your skunkworks project cannot access Gmail on port 465 or 587 due to firewall settings, then there is very little chance that your mail admins will allow it to connect to random servers on port 25 (since this is exactly what direct-to-MX bots/malware do).
Your best bet is to contact your admins and tell them you have an application that needs to send email (low volume, notification type, whatever), and ask them if they have an approved server that you can smart-host via.
Going around network security, even with the best of intentions, is generally a bad idea, since the rules are usually put in place for a reason. | 1 | 1 | 0 | I have set up an unused Macbook Pro as a shared server on our internal company intranet. I'm kind of running it as a skunkworks; it can access the internet, but doesn't have permission to access, say, Gmail via SMTP. It's also unlikely to get that access anytime soon.
I'd like to be able to run processes on the machine that send notification emails. Every library I can find seems to require an email server, and I can't access one.
Is there any way to set up my MBP to act as its own email server? Thanks. | How can I send mail via Python on Yosemite... without an outside server? | 1.2 | 0 | 0 | 291 |
27,912,563 | 2015-01-12T23:29:00.000 | 0 | 0 | 0 | 0 | python,django,internationalization,django-i18n | 27,913,910 | 3 | false | 1 | 0 | What you are experiencing is that your language selection is not sticking as it should and so to do this there are a few things to check.
My guess is that you have 'en' as your LANGUAGE_CODE in your settings so that will always be your fallback. Quite often the reason that you cannot get your language to stick is that you may not have the correct middleware in your settings MIDDLEWARE_CLASSES.
Try the following checklist to see if you have everything you need:
Add SessionMiddleware and LocaleMiddleware to your middleware stack in your settings, and ensure that SessionMiddleware comes before LocaleMiddleware, as the latter relies on sessions. LocaleMiddleware is responsible for fetching your desired language from a request via the URL, session, cookie, or request header (in that order, I believe), and falls back to your default language if all else fails (see the sketch after this list).
Ensure all languages that you want to serve are in your LANGUAGES list in settings
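A minimal settings sketch of both items (the middleware paths are the standard Django 1.x ones; the language list is illustrative -- adjust to your project):
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.locale.LocaleMiddleware',
    'django.middleware.common.CommonMiddleware',
    # ... the rest of your middleware ...
)
LANGUAGES = (
    ('en', 'English'),
    ('de', 'German'),
)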
That should hopefully do the trick so hope that helps. | 1 | 1 | 0 | I'm using internationalization.
So, everything works fine when I access http://localhost:8000/en/ and http://localhost:8000/de/
But when I access http://localhost:8000/ it redirects me to http://localhost:8000/en/ even when the last accessed page was http://localhost:8000/de/
Basically, I want to save language code, based on the page accessed, e.g. if I access http://localhost:8000/de/ then language is german. Next, when I access http://localhost:8000, it should point me to http://localhost:8000/de/, not default http://localhost:8000/en/
How can this be done? | Django i18n pattern | 0 | 0 | 0 | 436
27,912,757 | 2015-01-12T23:49:00.000 | 0 | 1 | 1 | 0 | python,c++,c,binary | 27,912,842 | 2 | true | 0 | 0 | The tab is represented in the ASCII chart as 0x09, or "00001001", in a binary string.
The Enter key is different because it could represent CR (carriage return), LF (Linefeed), or both.
The CR is represented as 0x0d, or "00001101" as binary string.
The LF is represented as 0x0A, or "00001010" as binary string.
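A quick way to check these from a Python shell:
format(ord('\t'), '08b')   # '00001001'  -- tab
format(ord('\r'), '08b')   # '00001101'  -- carriage return
format(ord('\n'), '08b')   # '00001010'  -- line feed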
A common convention is '\t' for tab and '\r' for CR, '\n' for newline. | 1 | 0 | 0 | I'm trying to make a python program that converts text to one long binary string. The usual line of test and sentences are easy enough to convert into binary but I'm having trouble with the whitespace.
How do I put in a binary byte to represent the enter key?
Do I just put in the '\' and an 'n' strings?
I would ideally want to be able to convert an entire text file into a binary string and be able to convert it back again. Obviously, if I were to do this with a python script, the tabbing would get messed up and the program would be broken.
Would a C language be better for doing this stuff?
Obviously a C program would still function without its whitespace whereas python would not.
In short, I need to know how to represent the 'tab' and 'enter' keys in binary, and how to create a function to translate them into binary. would bin(ord('\n')) be good? | text - binary conversion | 1.2 | 0 | 0 | 310 |
27,914,930 | 2015-01-13T04:32:00.000 | 2 | 0 | 1 | 0 | python-2.7,openstack-nova | 38,696,078 | 4 | false | 0 | 0 | I also had this problem, and resolved by
sudo yum install python-devel python-pip
sudo yum -y install gcc | 1 | 3 | 0 | I'm trying to install OpenStack python novaclient using pip install python-novaclient
This task fails: netifaces.c:185:6 #error You need to add code for your platform
I have no idea what code it wants.
Does anyone understand this? | pip install python-novaclient is failing due to netifaces.c | 0.099668 | 0 | 0 | 3,363 |
27,918,853 | 2015-01-13T09:32:00.000 | 1 | 0 | 0 | 0 | python,zodb,object-oriented-database | 32,856,893 | 1 | false | 0 | 0 | In your script you would modify the object stored in ZODB to match the new format of the object as your code requires.
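A rough sketch of such a script (the storage path, container and attribute names are hypothetical; the idea is just to walk the stored objects and bring them up to date):
import transaction
from ZODB.FileStorage import FileStorage
from ZODB.DB import DB
db = DB(FileStorage('Data.fs'))
root = db.open().root()
for obj in root['items'].values():          # 'items' stands in for whatever container holds your objects
    if not hasattr(obj, 'new_field'):
        obj.new_field = 'default'           # bring old instances up to the new class layout
        obj._p_changed = True               # mark the persistent object as changed
transaction.commit()
db.close()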
No, there is no easy to use migration "maker". | 1 | 2 | 0 | I can see in docs, that if I would change some fields in class then I need to write a migration script. But what exactly do I need to write in that script? Also is there a some automatic migration maker like there is in dgango? | How to make migrations in zodb? | 0.197375 | 0 | 0 | 179 |
27,929,197 | 2015-01-13T18:42:00.000 | 0 | 0 | 0 | 0 | python,mysql,encoding,urllib | 27,931,174 | 1 | false | 0 | 0 | Yes it is an encoding issue, make sure sure when you parse your data it doesn't encrypt it or turn it into byte encoded format. Those characters means your computer can't read the data that is being stored, so it isn't being stored in a data type that we can read. | 1 | 0 | 0 | I am using urllib in python 2.7 to retrieve webpages from the internet. After parsing the data and storing it on the database, i get the following symbols – and †and — and so on. I wanted to know how these symbols are generated and how to get rid of it? Is it an encoding issue? | MySQL - Strange symbols while using urllib in Python | 0 | 0 | 0 | 55 |
27,931,532 | 2015-01-13T21:07:00.000 | 0 | 0 | 1 | 0 | python-2.7,tkinter,redhat,tk | 27,931,570 | 1 | false | 0 | 0 | Try downloading Tkinter then using python 2.7 to run python setup.py install | 1 | 0 | 0 | So we are trying to deploy a python module called Tkinter in a large multi-user environment under RHEL. It seems to be installed in python 2.6 but not 2.7. We tied yum install but it seems to only do it under python 2.6. How can we deploy under 2.7? | Install Tkinter on python 2.6 and 2.7 | 0 | 0 | 0 | 1,563 |