Q_Id: int64 (337 – 49.3M)
CreationDate: string (length 23)
Users Score: int64 (-42 – 1.15k)
Other: int64 (0 – 1)
Python Basics and Environment: int64 (0 – 1)
System Administration and DevOps: int64 (0 – 1)
Tags: string (length 6 – 105)
A_Id: int64 (518 – 72.5M)
AnswerCount: int64 (1 – 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 – 1)
GUI and Desktop Applications: int64 (0 – 1)
Answer: string (length 6 – 11.6k)
Available Count: int64 (1 – 31)
Q_Score: int64 (0 – 6.79k)
Data Science and Machine Learning: int64 (0 – 1)
Question: string (length 15 – 29k)
Title: string (length 11 – 150)
Score: float64 (-1 – 1.2)
Database and SQL: int64 (0 – 1)
Networking and APIs: int64 (0 – 1)
ViewCount: int64 (8 – 6.81M)
20,897,851
2014-01-03T06:24:00.000
0
0
1
0
python,django,python-2.7,pip
33,917,949
11
false
1
0
On Windows, I had this issue with static files cropping up under PyDev/Eclipse with Python 2.7, due to an instance of Django (1.8.7) that had been installed under Cygwin. This caused a conflict between Windows-style paths and Cygwin-style paths, so the static files remained unfindable despite all the above fixes. I removed the extra distribution (so that all packages were installed by pip under Windows) and this fixed the issue.
5
18
0
I uninstalled Django on my machine using pip uninstall Django. It says it was successfully uninstalled, but when I check the Django version in the Python shell, it still shows the older version I installed. To remove it from the Python path, I deleted the django folder under /usr/local/lib/python-2.7/dist-packages/. However, the sudo pip search Django | more /^Django command still shows Django as installed. How do I completely remove it?
Uninstall Django completely
0
0
0
137,290
20,897,851
2014-01-03T06:24:00.000
0
0
1
0
python,django,python-2.7,pip
31,096,784
11
false
1
0
I had to use pip3 instead of pip in order to get the right versions for the right version of Python (Python 3.4 instead of Python 2.x). Check what you have installed at /usr/local/lib/python3.4/dist-packages. Also, when you run Python, you might have to type python3.4 instead of python in order to use the right version.
5
18
0
I uninstalled Django on my machine using pip uninstall Django. It says it was successfully uninstalled, but when I check the Django version in the Python shell, it still shows the older version I installed. To remove it from the Python path, I deleted the django folder under /usr/local/lib/python-2.7/dist-packages/. However, the sudo pip search Django | more /^Django command still shows Django as installed. How do I completely remove it?
Uninstall Django completely
0
0
0
137,290
20,900,380
2014-01-03T09:26:00.000
4
1
0
0
python,pytest
20,934,950
3
false
0
0
py.test's memory usage will grow with the number of tests. All tests are collected before they are executed, and for each test run a test report is stored in memory, which will be much larger for failures, so that all the information can be reported at the end. So to some extent this is expected and normal. However, I have no hard numbers and have never closely investigated this. We did run out of memory on some CI hosts ourselves before, but just gave them more memory to solve it instead of investigating. Currently our CI hosts have 2 GB of memory and run about 3,500 tests in one test run; it would probably work on half of that but might involve more swapping. PyPy is also a project that manages to run a huge test suite with py.test, so this should certainly be possible. If you suspect the C code of leaking memory, I recommend building a (small) test script which just exercises the extension module API (with or without py.test) and invoking that in an infinite loop while gathering memory stats after every loop. After a few loops the memory should not increase any more.
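The leak-hunting loop described above might be sketched like this; the list allocation is a stand-in for calls into the C extension API under test, and the `resource` module is Unix-only:

```python
import gc
import resource  # Unix-only; on Windows, psutil.Process().memory_info() is an alternative


def exercise_extension(n_loops=5):
    """Repeatedly run the workload and record peak memory after each loop."""
    peaks = []
    for _ in range(n_loops):
        # Stand-in for calls into the C extension API under test
        data = [bytes(1024) for _ in range(1000)]
        del data
        gc.collect()
        # ru_maxrss: peak resident set size so far (kilobytes on Linux)
        peaks.append(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
    return peaks


peaks = exercise_extension()
# After a few warm-up loops the peak should plateau; steady growth suggests a leak.
print(peaks)
```

Running this for a few hundred loops and plotting the numbers makes a leak in the extension obvious, independently of anything py.test accumulates.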
3
9
0
I am using py.test (version 2.4, on Windows 7) with xdist to run a number of numerical regression and interface tests for a C++ library that provides a Python interface through a C module. The number of tests has grown to ~2,000 over time, but we are running into some memory issues now. Whether using xdist or not, the memory usage of the python process running the tests seems to be ever increasing. In single-process mode we have even seen a few issues of bad allocation errors, whereas with xdist total memory usage may bring down the OS (8 processes, each using >1GB towards the end). Is this expected behaviour? Or did somebody else experience the same issue when using py.test for a large number of tests? Is there something I can do in tearDown(Class) to reduce the memory usage over time? At the moment I cannot exclude the possibility of the problem lying somewhere inside the C/C++ code, but when running some long-running program using that code through the Python interface outside of py.test, I do see relatively constant memory usage over time. I also do not see any excessive memory usage when using nose instead of py.test (we are using py.test as we need junit-xml reporting to work with multiple processes)
Py.test: excessive memory usage with large number of tests
0.26052
0
0
3,742
20,900,380
2014-01-03T09:26:00.000
1
1
0
0
python,pytest
42,722,815
3
false
0
0
We also experienced similar problems. In our case we run about 4,600 test cases. We use pytest fixtures extensively, and we managed to save a few MB by scoping the fixtures slightly differently (re-scoping several from "session" to "class" or "function"). However, test performance dropped.
3
9
0
I am using py.test (version 2.4, on Windows 7) with xdist to run a number of numerical regression and interface tests for a C++ library that provides a Python interface through a C module. The number of tests has grown to ~2,000 over time, but we are running into some memory issues now. Whether using xdist or not, the memory usage of the python process running the tests seems to be ever increasing. In single-process mode we have even seen a few issues of bad allocation errors, whereas with xdist total memory usage may bring down the OS (8 processes, each using >1GB towards the end). Is this expected behaviour? Or did somebody else experience the same issue when using py.test for a large number of tests? Is there something I can do in tearDown(Class) to reduce the memory usage over time? At the moment I cannot exclude the possibility of the problem lying somewhere inside the C/C++ code, but when running some long-running program using that code through the Python interface outside of py.test, I do see relatively constant memory usage over time. I also do not see any excessive memory usage when using nose instead of py.test (we are using py.test as we need junit-xml reporting to work with multiple processes)
Py.test: excessive memory usage with large number of tests
0.066568
0
0
3,742
20,900,380
2014-01-03T09:26:00.000
1
1
0
0
python,pytest
70,989,275
3
false
0
0
Try using --tb=no, which should prevent pytest from accumulating stacks on every failure. I have found that it's better to have your test runner run smaller instances of pytest in multiple processes, rather than one big pytest run, because of its accumulation in memory of every error. pytest should probably accumulate test results on disk, rather than in RAM.
3
9
0
I am using py.test (version 2.4, on Windows 7) with xdist to run a number of numerical regression and interface tests for a C++ library that provides a Python interface through a C module. The number of tests has grown to ~2,000 over time, but we are running into some memory issues now. Whether using xdist or not, the memory usage of the python process running the tests seems to be ever increasing. In single-process mode we have even seen a few issues of bad allocation errors, whereas with xdist total memory usage may bring down the OS (8 processes, each using >1GB towards the end). Is this expected behaviour? Or did somebody else experience the same issue when using py.test for a large number of tests? Is there something I can do in tearDown(Class) to reduce the memory usage over time? At the moment I cannot exclude the possibility of the problem lying somewhere inside the C/C++ code, but when running some long-running program using that code through the Python interface outside of py.test, I do see relatively constant memory usage over time. I also do not see any excessive memory usage when using nose instead of py.test (we are using py.test as we need junit-xml reporting to work with multiple processes)
Py.test: excessive memory usage with large number of tests
0.066568
0
0
3,742
20,900,530
2014-01-03T09:36:00.000
1
0
0
0
javascript,html,python-2.7,pyjamas
20,901,453
1
false
1
0
There are more than just security problems; it's simply not possible. You can't use the Python socket library inside the client browser. You can convert Python code to JS (probably badly), but you can't use a C-based library that is probably not present on the client. You can access the browser only; you cannot reliably get the hostname of the client PC. Maybe ask another question describing what you are trying to achieve, and someone might be able to help.
1
0
0
I want to execute my Python code on the client side, even though there might be security problems. How can I write it, with imported modules and all? I have tried using pyjs to convert the code below to JS: import socket print socket.gethostbyname_ex(socket.gethostname())[2][0] but I cannot find out how to do this. Please help me convert this to JS, and explain how to write other Python scripts and import modules in HTML.
How to Write Python Script in html
0.197375
0
1
152
20,901,030
2014-01-03T10:03:00.000
0
0
0
0
python,graph
20,901,191
1
false
0
0
If you have graphs with the same set of nodes (V1, ..., VN) (nodes unique to one graph do not matter; you simply ignore them, as they cannot be part of any common path) and want to find a shortest/longest path, you can proceed as follows: Generate the intersection of all the graphs, that is, a graph that has nodes (V1, ..., VN) where node Vi is connected to Vj iff Vi is connected to Vj in all your graphs. If you have the adjacency matrix of each graph, this is just an element-wise multiplication of these matrices. Then find the shortest/longest path in the resulting graph; it is guaranteed to be the shortest (between some two vertices, I suppose?) / longest (common) path among all of them.
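A minimal sketch of that approach, using NumPy element-wise multiplication for the intersection and a plain BFS for the shortest path; the two toy graphs are made-up data:

```python
import numpy as np
from collections import deque


def intersect(adjs):
    """Element-wise product of 0/1 adjacency matrices: an edge survives
    iff it is present in every graph."""
    common = adjs[0].copy()
    for a in adjs[1:]:
        common = common * a
    return common


def shortest_path(adj, src, dst):
    """BFS shortest path (fewest edges) between two nodes of an unweighted graph."""
    n = len(adj)
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                      # reconstruct the path backwards
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in range(n):
            if adj[u][v] and v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # no path survives in the common graph


# Two toy graphs on nodes 0..3
g1 = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]])
g2 = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]])
common = intersect([g1, g2])
print(shortest_path(common, 0, 3))  # → [0, 1, 3]
```

The direct edge 0–2 exists only in g1, so the common shortest route from 0 to 3 has to go through node 1.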
1
0
1
I need a hint on where to look for an algorithm (maybe even in Python). I have a huge number of graphs and I need to find the common shortest and longest path for these graphs, or common parts (shortest or longest). Update, for a clearer description: before the analysis, the graphs already have connections between nodes, so they are already like paths. As a result, I need the common possible path for all graphs, depending on the connections between the nodes.
Common longest and shortest path in graphs
0
0
0
798
20,909,383
2014-01-03T17:30:00.000
1
0
0
0
python,django,templates
20,910,146
2
true
1
0
Sadly enough hacking around the problem like that is the best solution available. It should be noted that the variable is called TEMPLATE_STRING_IF_INVALID btw. Alternatively I would recommend using Jinja2 together with Coffin, that makes debugging a lot easier as well and Jinja2 actually does give proper stacktraces for errors like these.
1
1
0
I'm using the Django template system for some form emails, with the templates editable by the end user. A template's render method returns the rendered content and silently passes over invalid variables and tags. Is there a reasonable way to tell if there was an error during rendering? I've considering setting settings.TEMPLATE_STRING_IF_INVALID to a unique string and then testing for the presence of this string but that would affect normal template rendering, which isn't acceptable. I've scanned Django's source code in the hope there is a "render invalid variable/tag" method I can override cleanly to raise an exception, but no such luck. Any ideas?
How to tell if there's a render error when manually rendering a Django template?
1.2
0
0
248
20,909,476
2014-01-03T17:36:00.000
0
0
1
0
windows,command-line,ipython
20,959,274
1
false
0
0
You can change that from "System Properties", "Advanced" tab. Click "Environment Variables" on the bottom right. Then you can choose to edit the path variable. Append the IPython directory to the path and press "OK". (Be careful only to append the directory to the "path" variable; do not delete the old entries.)
1
0
0
So I just switched over from the Enthought Canopy distribution to the Anaconda Distribution (had a lot of trouble installing modules in the Canopy version), and now I can't remember what, if anything I did to allow me to run iPython from any directory. I can run both Python and iPython from the install directory, but this doesn't really help me as all my scripts are stored in various other directories. I believe this is more of a Windows Command Line question, but I could be wrong. Any help is appreciated! Thanks.
I want to be able to run iPython from any directory through Command Line
0
0
0
42
20,910,653
2014-01-03T18:47:00.000
0
0
1
0
python,python-3.x,binary-data,bit-shift
20,910,711
4
false
0
0
You could convert your bytes to an integer, then multiply or divide by powers of two to accomplish the shifting.
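A sketch of that idea using int.from_bytes and the shift operators; exactly which bits "bit 8 to 21" denotes is an assumption here (counting from the least-significant bit of the big-endian value):

```python
def extract_bits(raw, start, length):
    """Pull `length` bits out of a bytes object of any size, where bit 0 is
    the least-significant bit of the big-endian integer value."""
    value = int.from_bytes(raw, byteorder='big')  # whole buffer as one integer
    return (value >> start) & ((1 << length) - 1)  # shift down, then mask


# b'\x93\x4c\x00' is the integer 0x934c00; take 14 bits starting at bit 8
n = extract_bits(b'\x93\x4c\x00', 8, 14)
print(hex(n))  # → 0x134c
```

Because the buffer becomes one ordinary integer first, this works the same for 2-, 3-, or 5-byte objects, which is where a raw `bytes >> 3` fails.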
1
6
0
I am trying to extract data out of a byte object. For example: From b'\x93\x4c\x00' my integer hides from bit 8 to 21. I tried to do bytes >> 3 but that isn't possible with more than one byte. I also tried to solve this with struct but the byte object must have a specific length. How can I shift the bits to the right?
How to shift bits in a 2-5 byte long bytes object in python?
0
0
0
17,510
20,911,147
2014-01-03T19:20:00.000
1
0
0
0
python,numpy
20,911,192
5
false
0
0
When a random seed needs to be recorded, people usually use the system time, so the program acts differently each time it is run but the seed can still be saved and captured. Why don't you try that? If you don't want to do that for some reason, use the null version, numpy.random.seed(seed=None), then get a random number from it, and set the seed to that new random number.
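The second suggestion, sketched with the legacy numpy.random API the question uses:

```python
import numpy as np

np.random.seed(seed=None)            # nondeterministic seeding (system entropy/time)
seed = np.random.randint(0, 2**31)   # draw a seed value we can write down
np.random.seed(seed)                 # from here on, the run is reproducible

first = np.random.rand(3)
np.random.seed(seed)                 # re-seed with the saved value...
again = np.random.rand(3)
print((first == again).all())        # → True
```

Logging `seed` somewhere (a results file, the experiment name) is enough to replay the whole run later.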
1
11
1
I would like to choose a random seed for numpy.random and save it to a variable. I can set the seed using numpy.random.seed(seed=None), but how do you get NumPy to choose a random seed and tell you what it is? NumPy seems to use /dev/urandom on Linux by default.
Choose random seed and save it
0.039979
0
0
6,170
20,914,146
2014-01-03T22:50:00.000
0
0
0
0
python,path,pygame
20,914,612
1
false
0
1
The problem has been fixed. The current working directory was the location of the first file, not the folder 'rungame' containing the file that needs the pictures, so that is where Python looked for winp.png when no path was specified.
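One way to make the load independent of the caller's working directory is to build an absolute path from the script's own location; `resource_path` is a hypothetical helper, and the pygame call quoted in the comment comes from the question:

```python
import os


def resource_path(filename, base=None):
    """Absolute path to a bundled file, relative to this script's directory
    (or an explicit base), not the caller's current working directory."""
    if base is None:
        base = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(base, filename)


# Inside xsim.py one would then write, as in the question:
# pygame.display.set_icon(pygame.image.load(resource_path('winp.png')))
print(resource_path('winp.png', base='/home/user/rungame'))  # → /home/user/rungame/winp.png
```

With this, it no longer matters that os.startfile launches xsim.py from the outer script's directory.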
1
0
0
There's one file which is run with Python33, it has this line of code: `os.startfile(r'rungame\xsim.py')` That opens up a file in Python32, called xsim.py, and that works fine. However, xsim, which uses pygame modules, does not run as it would usually run. It cannot load its images when accessed this way, the first image is 'winp.png', and it is in the same folder as xsim.py (rungame). Here is the code which loads it: `pygame.display.set_icon(pygame.image.load('winp.png'))`
Pygame image loading error when accessing Python through another Python file
0
0
0
56
20,914,516
2014-01-03T23:21:00.000
1
0
1
0
python,csv,types
20,920,456
2
true
0
0
Why don't you use the straightforward approach? If all values can be parsed as integers, the column is integers; otherwise, if all values can be parsed as doubles, the column is doubles; otherwise, the column is all strings. The reason why there is no library for this is probably that it's trivial to implement using the existing string-to-int and string-to-double conversion functions.
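The cascade above might be sketched as follows (`infer_column_type` is a hypothetical name):

```python
def infer_column_type(values):
    """Return int, float, or str for a column of strings, using the
    try-int, then try-float, else-string cascade described above."""
    for target in (int, float):
        try:
            for v in values:
                target(v)          # raises ValueError on the first misfit
        except ValueError:
            continue               # this type doesn't fit; try the next one
        return target
    return str                     # nothing numeric fits


print(infer_column_type(['1', '2', '30']))     # → <class 'int'>
print(infer_column_type(['1.5', '2', '3e4']))  # → <class 'float'>
print(infer_column_type(['1', 'abc']))         # → <class 'str'>
```

Applied per column of the parsed CSV rows, this covers the int/double/string case from the question in a dozen lines.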
1
0
0
I am trying to figure out how to do some nice type inference on the columns of a CSV file. Are there any libraries that might tell me, for example, that a column contains only integers? All values are of course available in string format. I will write my own tool if nothing of this sort already exists, but it seems weird to me that such a basic task does not have a library counterpart somewhere.
Type inference of values contained in strings stored in a list
1.2
0
0
59
20,914,919
2014-01-03T23:57:00.000
0
0
1
1
python,program-flow
20,915,021
4
false
0
0
You could look for a cross-reference program. There is an old program called pyxr that does this. The aim of a cross-reference is to let you know how classes refer to each other. Some IDEs also do this sort of thing.
1
2
0
I'm in the process of learning how a large (356-file), convoluted Python program is set up. Besides manually reading through and parsing the code, are there any good methods for following program flow? There are two methods which I think would be useful: Something similar to Bash's "set -x" Something that displays which file outputs each line of output Are there any methods to do the above, or any other ways that you have found useful?
Deciphering large program flow in Python
0
0
0
147
20,915,779
2014-01-04T01:43:00.000
0
0
0
0
python,django,django-models,centos6
20,915,876
1
true
1
0
I strongly recommend that you use the latest Django release (currently 1.6.1) instead of the development version.
1
1
0
I have a VPS with CentOS 6, Python 2.7, and Django 3.0 installed. I have created a new app and corrected my system path, but every time I run the server I get: RuntimeError: App registry isn't ready yet. I understand this is already discussed in Django, but the information is very brief. Can someone help me overcome this issue, please? Many thanks in advance.
Strange Error Django Runtime
1.2
0
0
243
20,916,549
2014-01-04T03:40:00.000
0
0
1
0
python
20,916,569
2
false
0
0
I don't want to have to maintain a list It's what you are meant to do; and you would have to use loops anyway. You're effectively asking the language to create a list for you automatically. Well, why would it? Contrary to what you might expect, you almost always will not need or want a list of every single instance of a class ever created. In fact, it's entirely possible that you don't even really want that for your current program (whether you yet realize it or not). There are all kinds of possible reasons, in practice, why you might want to create instances that are not subject to the "usual" handling.
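If you do decide you want every instance reachable (the pattern the answer cautions is rarely needed), a class-level registry is the usual sketch; a WeakSet drops instances automatically once they are garbage-collected, so the registry never keeps dead objects alive. The class name is taken from the question; everything else is illustrative:

```python
import weakref


class Mobilesuits:
    _instances = weakref.WeakSet()       # registry of live instances

    def __init__(self, coordinates):
        self.coordinates = coordinates   # [x, y, z] grid position
        Mobilesuits._instances.add(self)

    @classmethod
    def all_coordinates(cls):
        """Coordinates of every live instance, e.g. for a radar sweep."""
        return [m.coordinates for m in cls._instances]


a = Mobilesuits([0, 0, 0])
b = Mobilesuits([10, 2, 3])
print(sorted(Mobilesuits.all_coordinates()))  # → [[0, 0, 0], [10, 2, 3]]
```

A radar method can then compute distances against `all_coordinates()` without any manually maintained list.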
1
2
0
I have a class called Mobilesuits, and each instance of that class has an attribute called coordinates, which consists of its grid coordinates in a list (x, y, z). I am trying to make a radar method which would detect how close a given vehicle is to other vehicles, but I can't find a way to reference every object's coordinates simultaneously. Is there an easy way to do this in Python? I don't want to have to maintain a list of all vehicles and, every time I want to perform a global change, go through the whole list with a for loop, but that is the only way I can think of.
How do I reference all class instances at the same time?
0
0
0
73
20,922,242
2014-01-04T14:34:00.000
1
0
1
0
python,pdf,reportlab
20,923,227
1
true
0
0
A PDF doesn't contain thumbnails, so there is nothing to extract. You will have to use some third-party library like Ghostscript or MuPDF to rasterize each page.
1
0
0
We have a crossplatfrom (Windows, Linux, OS X, App Engine) Python application using reportlab to create PDF files, which include thumbnails (as seen in Acrobat Reader). Is there a way to extract these thumbnails from the PDF file for document managment purposes (without rendering the pages themself with Ghostscript, MuPDF)?
Extracting PDF thumbnails
1.2
0
0
1,068
20,928,511
2014-01-05T00:27:00.000
0
0
0
0
android,python,google-app-engine,google-cloud-endpoints
20,942,935
1
true
1
0
I figured out the problem. When executing the command endpointscfg.py get_client_lib java \ -o . your_module.YourApi, make sure you exclude the "\". endpointscfg.py get_client_lib java -o . your_module.YourApi
1
0
0
If you are experiencing issues with an error that mentions Unrecognized Arguments, then try this solution. When executing endpointscfg.py get_client_lib java \ -o . your_module.YourApi, make sure to exclude the "\". This worked for me and the .zip file was generated with no problem. New command, from the root of the Python project: endpointscfg.py get_client_lib java -o . your_module.YourApi
Python - Unrecognized Arguments: your_module.YourApi
1.2
0
0
91
20,931,426
2014-01-05T08:04:00.000
1
0
0
0
python,selenium,web-scraping,cloudflare
23,142,928
1
false
1
0
See, what ScrapeShield does is check whether you are using a real browser; it's essentially probing your browser for certain bugs. Different web browsers react differently to different tests (say, Chrome can't process an iframe when there is a 303 error on the same line), and WebDriver doesn't react to these the way a real browser does, causing the system to say "We got an intruder, change the page!". I might be correct, though I'm not 100% sure... More info on the source: I found most of this information in a DEF CON talk about web sniffers and preventing them from getting proper vulnerability information from the server; the speaker made a web browser identifier in PHP too.
1
7
0
I'm working on a webscraping project, and I am running into problems with cloudflare scrapeshield. Does anyone know how to get around it? I'm using selenium webdriver, which is getting redirected to some lightspeed page by scrapeshield. Built with python on top of firefox. Browsing normally does not cause it to redirect. Is there something that webdriver does differently from a regular browser?
Bypassing Cloudflare Scrapeshield
0.197375
0
1
4,617
20,933,214
2014-01-05T11:51:00.000
1
0
0
0
python,django,rest,tastypie
21,005,266
2
false
1
0
This sounds like something completely outside of TastyPie's wheelhouse. Why not have a single view somewhere decorated with @require_GET, if you want to control headers, and return an HttpResponse object with the desired payload as application/json? The fact that your object is a singleton and all other RESTful interactions with it are prohibited suggests that a REST library is the wrong tool for this job.
1
8
0
I'm using tastypie and I want to create a Resource for a "singleton" non-model object. For the purposes of this question, let's assume what I want the URL to represent is some system settings that exist in an ini file. What this means is that...: The fields I return for this URL will be custom created for this Resource - there is no model that contains this information. I want a single URL that will return the data, e.g. a GET request on /api/v1/settings. The returned data should return in a format that is similar to a details URL - i.e., it should not have meta and objects parts. It should just contain the fields from the settings. It should not be possible to GET a list of such object nor is it possible to perform POST, DELETE or PUT (this part I know how to do, but I'm adding this here for completeness). Optional: it should play well with tastypie-swagger for API exploration purposes. I got this to work, but I think my method is kind of ass-backwards, so I want to know what is the common wisdom here. What I tried so far is to override dehydrate and do all the work there. This requires me to override obj_get but leave it empty (which is kind of ugly) and also to remove the need for id in the details url by overriding override_urls. Is there a better way of doing this?
Creating a tastypie resource for a "singleton" non-model object
0.099668
0
0
1,156
20,933,562
2014-01-05T12:30:00.000
1
1
1
0
python,git,github
20,933,608
3
false
0
0
When you upload your files to GitHub, only what is in your git repo gets uploaded. .pyc files should not have been added to your git repo in the first place; if you did add them, remove them before pushing your repository. You can use a .gitignore file to keep .pyc files from showing up in your git status view.
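A minimal .gitignore for this might look like the following (a sketch; add further patterns as needed):

```gitignore
# .gitignore, placed in the repository root
*.pyc
__pycache__/
```

Note that .gitignore only affects untracked files: anything already committed additionally needs a one-time `git rm --cached <file>` before it disappears from the repo.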
1
2
0
I have a project written in Python which I would like to upload to my GitHub repo. In my source directory on my laptop, there are also compiled Python scripts (.pyc) which I would like to avoid uploading to GitHub. The documentation available on the internet shows uploading the entire source directory to the GitHub repo. Is there a way to avoid uploading certain file types, specifically *.pyc, to a GitHub repo?
How to upload only source files to github?
0.066568
0
0
2,544
20,935,204
2014-01-05T15:13:00.000
0
1
0
1
python,python-2.6,distutils
20,959,516
1
false
0
0
Can’t you use a relative path in the .pth file? Or avoid using a .pth file at all? (They’re used for module collections that pre-date packages in Python, or horrible import hacks.)
1
0
0
I want to use distutils to make a .msi for my Python library. Before installation, the user can choose the destination path of the installation. Depending on this path, I want to generate a .pth file that will contain the chosen path. For this to be possible I need to run a post-installation script that will place the .pth in the correct place. My question is: is there a way of getting the installation path that was selected by the user, at run time?
Get user's installation path from distutils
0
0
0
91
20,939,204
2014-01-05T21:09:00.000
1
0
1
0
python,string,python-3.x
20,939,593
2
false
0
0
It depends a lot on what you're doing with the strings. I'm not exactly sure how Python stores strings but I've done a lot of work on XEmacs (similar to GNU Emacs) and on the underlying implementation of Emacs Lisp, which is a dynamic language like Python, and I know how strings are implemented there. Strings are going to be stored as blocks of memory similar to arrays. There's not a huge issue creating large arrays in Python, so I don't think simply storing the strings this way will cause performance issues. Some things to consider though: How are you building up the string? If you build up piece-by-piece by simply appending to ever larger strings, you have an O(N^2) algorithm that will be very slow. Java handles this with a StringBuilder class. I'm not sure if there's an exact equivalent in Python but you can simply create an array with all the parts you want to join together, then join at the end using ''.join(array). Do you need to search the string? This isn't related to creating the strings but it's something to consider. Searching will in general be O(n) in the size of the string; there are speedups that make it O(n/m) where m is the size of the substring you're searching for, but that's about it. The main consideration here is whether to store one big string or a series of substrings. If you need to search all the substrings, that won't help much over searching a big string, but it's possible you might know in advance that some parts don't need to be searched. Do you need to access substrings? Again, this isn't related to creating the strings, it's something to consider. Accessing a substring by position is just a matter of indexing to the right memory location, but if you need to take large substrings, it may be inefficient, and you might be able to speed things up by storing your string as an array of substrings, and then creating a new string as another array with some of the strings shared. 
However, doing it this way takes work, and shouldn't be done unless it's really necessary. In sum, I think for simple cases it's fine to have large strings like this, but you should think about the sorts of operations you're going to perform and what their O(...) time is.
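The append-vs-join point can be illustrated like this; CPython may locally optimize the += loop, but the join form is the one with guaranteed linear behavior:

```python
# Building a big string piece by piece: the pattern warned about vs. the join idiom.
parts = ['word%d' % i for i in range(10000)]

# Quadratic pattern: each += may copy the entire string built so far
slow = ''
for p in parts:
    slow += p

# Linear pattern: collect the pieces in a list, join once at the end
fast = ''.join(parts)

print(slow == fast)  # → True
```

For multi-megabyte strings like the ones in the question, the difference between these two shapes is what dominates, not the size of the final string itself.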
2
5
0
I recently discovered that a student of mine was doing an independent project in which he was using very large strings (2-4MB) as values in a dictionary. I've never had a reason to work with such large blocks of text and it got me wondering if there were performance issues associated with creating such large strings. Is there a better way of doing it than to simply create a string? I realize this question is largely context dependent, but I'm looking for generalized answers that may cover more than one possible use-case. If you were working with that much text, how would you store it in your code, and would you do anything different than if you were simply working with an ordinary string of only a few characters?
How best to store large sequences of text in Python?
0.099668
0
0
3,186
20,939,204
2014-01-05T21:09:00.000
0
0
1
0
python,string,python-3.x
20,940,717
2
false
0
0
I would say that potential issues depend on two things: how many strings of this kind are held in memory at the same time, compared to the capacity of the memory (the RAM), and what operations are performed on these strings. It seems to me I've read that string operations in Python are very efficient, so working on very long strings isn't supposed to present a problem. But in fact it depends on the algorithm of each operation performed on a big string. This answer is rather vague; I haven't enough experience to make a more useful estimate of the problem. But the question is also very broad.
2
5
0
I recently discovered that a student of mine was doing an independent project in which he was using very large strings (2-4MB) as values in a dictionary. I've never had a reason to work with such large blocks of text and it got me wondering if there were performance issues associated with creating such large strings. Is there a better way of doing it than to simply create a string? I realize this question is largely context dependent, but I'm looking for generalized answers that may cover more than one possible use-case. If you were working with that much text, how would you store it in your code, and would you do anything different than if you were simply working with an ordinary string of only a few characters?
How best to store large sequences of text in Python?
0
0
0
3,186
20,939,299
2014-01-05T21:17:00.000
4
0
1
0
python,multithreading
56,328,888
4
false
0
0
Threading is allowed in Python; the only problem is that the GIL will make sure that just one thread is executed at a time (no parallelism). So basically, if you multi-thread code to speed up a calculation, it won't get faster, since just one thread executes at a time; but if you use threads to interact with a database, for example, it will.
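The I/O-versus-CPU distinction can be demonstrated with sleeps standing in for I/O waits (the GIL is released while a thread sleeps, just as it is during most blocking I/O), so five 0.2-second waits overlap instead of taking a full second:

```python
import threading
import time


def fake_io(results, i):
    """Stand-in for an I/O-bound task (network call, DB query)."""
    time.sleep(0.2)        # GIL released here; other threads proceed
    results[i] = i * i


results = {}
start = time.time()
threads = [threading.Thread(target=fake_io, args=(results, i)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

print(sorted(results.values()))  # → [0, 1, 4, 9, 16]
print(elapsed < 0.8)             # five 0.2 s waits overlapped, not serialized
```

Replace the sleep with a pure-Python number-crunching loop and the elapsed time grows to roughly the serial total, which is exactly the GIL effect described above; the string-processing example in the question falls on that CPU-bound side.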
1
119
0
I'm slightly confused about whether multithreading works in Python or not. I know there has been a lot of questions about this and I've read many of them, but I'm still confused. I know from my own experience and have seen others post their own answers and examples here on StackOverflow that multithreading is indeed possible in Python. So why is it that everyone keep saying that Python is locked by the GIL and that only one thread can run at a time? It clearly does work. Or is there some distinction I'm not getting here? Many posters/respondents also keep mentioning that threading is limited because it does not make use of multiple cores. But I would say they are still useful because they do work simultaneously and thus get the combined workload done faster. I mean why would there even be a Python thread module otherwise? Update: Thanks for all the answers so far. The way I understand it is that multithreading will only run in parallel for some IO tasks, but can only run one at a time for CPU-bound multiple core tasks. I'm not entirely sure what this means for me in practical terms, so I'll just give an example of the kind of task I'd like to multithread. For instance, let's say I want to loop through a very long list of strings and I want to do some basic string operations on each list item. If I split up the list, send each sublist to be processed by my loop/string code in a new thread, and send the results back in a queue, will these workloads run roughly at the same time? Most importantly will this theoretically speed up the time it takes to run the script? Another example might be if I can render and save four different pictures using PIL in four different threads, and have this be faster than processing the pictures one by one after each other? I guess this speed-component is what I'm really wondering about rather than what the correct terminology is. 
I also know about the multiprocessing module but my main interest right now is for small-to-medium task loads (10-30 secs) and so I think multithreading will be more appropriate because subprocesses can be slow to initiate.
Does Python support multithreading? Can it speed up execution time?
0.197375
0
0
63,454
20,939,891
2014-01-05T22:13:00.000
1
0
0
0
python,tkinter
20,940,725
2
false
0
1
Technically it should be possible, by giving the scrollbar the keyboard focus and then adding some custom bindings. That's a fairly unusual thing to do. Since the scrollbars are drawn with native widgets on Windows and the Mac, it might be impossible on those platforms. What you probably want to do instead is set some bindings on the application as a whole, or on some sort of widget that typically gets focus such as a canvas or text widget. Your bindings can call the xview and yview commands and give it arguments to tell it how to scroll, which is exactly what the scrollbar does.
1
0
0
Not sure what else to call 'active'. Is it possible to have the scrollbar, once clicked on to remain 'active'? Another words once I click on the scrollbar I would like to be able to move the scrollbar with the keyboard(left/right arrow keys) or the mouse. Is this possible? If so what do I have to do to accomplish it?
Python tkinter scrollbar active state
0.099668
0
0
536
20,941,211
2014-01-06T00:46:00.000
2
1
0
1
python,operating-system,raspberry-pi,bare-metal
20,941,275
2
false
0
0
Operating systems generally use "low level" languages like C/C++/D in order to have proper access to system resources. The problems with writing one in Python are, first, that you need something to run an interpreter below it (defeating the purpose of having the OS be written in Python) and, second, that there aren't good ways to manage resources in Python. Furthermore, you said you want it to be Linux-based; however, Linux is written in C (for the reasons listed above and a few more), and therefore writing something in Python will not be very productive. If you want to stick with Python, maybe you could write a window manager for Linux instead? It would be much easier than an OS, and Python would be a fine language for such a project.
1
0
0
I don't know much about writing operating systems, but I though this would be a good way to learn. There are tutorials for raspberry pi operating systems, but they're not linux-based or made with python. I'm just looking for a general tutorial here.
Programming a linux-based Raspberry Pi operating system with python
0.197375
0
0
548
20,941,829
2014-01-06T02:05:00.000
-1
0
0
0
python,nginx
20,941,889
1
false
1
0
The short answer is no. Since you are using a hosting plan, anything you run there is "exposed to the world"; after all, you yourself have to access it remotely, like everyone else. You have two options: first, configure the Digital Ocean server to only accept connections from your public IP; second, keep using your development server locally until you are ready for primetime.
1
0
0
I am running a dedicated server on Digital Ocean. My site uses Flask on NGINX through Gunicorn. During development I plopped a search engine (solr) on a local VM (through VMWare Fusion) which happens to be running Tomcat. It could have been running any web server per my question. In my app I make all search requests to that local ip: 192.168.1.5. Now, when I install Tomcat on my server and run it you can see it publicly at mysite.com:8080. There's the old welcome screen of Tomcat for the world to see. I want my app to be able to access it locally through localhost:8080 but not show it to the world. Is this possible?
Run web server locally without exposing it publicly?
-0.197375
0
0
147
20,943,169
2014-01-06T04:47:00.000
2
0
0
0
python,flask,flask-assets
20,943,669
1
false
1
0
Flask was incorrectly identifying the location of my static folder. That was the issue. To solve it I told Flask where my static folder sits.
1
0
0
I am trying to get Flask-Assets to load my assets. My css is here: /home/myname/projects/py/myapp/myapp/static/css/lib/somecsslib.css It is by default looking in the wrong directory. I get this: No such file or directory: '/home/myname/projects/py/myapp/static/css/lib/somecsslib.css' I am initializing it normally; assets = Environment(app) I tried setting the load_path: assets.load_path = '/home/myname/projects/py/myapp/myapp/static/' When I do that I get the following error: BundleError: 'css/lib/somecsslib.css' not found in load path: /home/myname/projects/py/myapp/myapp/static/ EDIT I just found out that load_path is a list. I tried this instead: assets.load_path.append('/home/myname/projects/py/myapp/myapp/static/') I got this as a result: BuildError: [Errno 2] No such file or directory: '/css/lib/somecsslib.css'
Flask assets searching in the wrong directory
0.379949
0
0
1,175
20,945,494
2014-01-06T07:59:00.000
0
0
0
0
python,django,heroku
20,981,795
2
false
1
0
Could it be that you forgot to heroku ps:scale web=1 ? If not, could your Procfile be missing? Your Procfile should be named Procfile (no extension, capital P), and be placed in your project's root. You can check that by heroku run bash, then changing into your app's directory and cat Procfile. Finally, if that's already the case, then could your app have failed to start and given up? Are there any other errors in the log?
1
0
0
Programming newb, Trying to use Heroku for the first time for a Django app. After I push it to Heroku, the Dynos field is blank. I expected to see my procfile: web: python manage.py runserver 0.0.0.0:$PORT --noreload Of course, when I try to open the application on Heroku, I get: An error occurred in the application and your page could not be served. Please try again in a few moments. If you are the application owner, check your logs for details Could this be because I don't have an extension on my procfile? My Procfile should just be a file I created in my text editor, right? Here is the log: 2014-01-06T07:34:17.321925+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path=/ host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect= service= status=503 bytes= 2014-01-06T07:34:17.778360+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path=/favicon.ico host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect= service= status=503 bytes= 2014-01-06T07:35:01.608749+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path=/ host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect= service= status=503 bytes= 2014-01-06T07:35:01.868486+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path=/favicon.ico host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect= service= status=503 bytes= 2014-01-06T07:46:57.862560+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path=/ host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect= service= status=503 bytes= 2014-01-06T07:46:58.114270+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path=/favicon.ico host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect= service= status=503 bytes=
Dynos field is blank after pushing Django app to Heroku
0
0
0
853
20,945,972
2014-01-06T08:37:00.000
0
0
0
0
python,django,string,performance,replace
20,985,948
4
false
1
0
This is more of a project-architecture problem. The best way of doing what I think you want is to: create a list of the "string" options in a database; in the user model, create a field like "chosen_string"; when the user selects the option to be used (the string), update the user model; whenever you want to use the string, just do a query.
1
0
0
Currently I m creating quite big project and I need to implement functionality which will replace string with string provided by user. Moreover each of user can have his own custom string. I will give example for better understanding there is a string "object" and user1 want to change string "object" to "tree", in whole project (all templates etc) string "object" is replaced by "tree" My ideas are as folllow: Creating middleware which would replace strings Creating js plugin Creating blockreplace(something like blocktrans) which would replace strings only in block ( I would also need to connect it with trans) Do you have any other ideas which would be better? And which idea for you is the best option? Examples: Text in template main.html ... this object is very useful ... and every user can personalize site by his custom string user1 wants "tree" instead of "object" user2 wants "apple" user3 wants "grape" They save their settings and then when they enter main.html they see user1: this tree is very useful user2: this apple is very useful user3: this grape is very useful hope it helps
Django - replacing string in whole project
0
0
0
375
20,946,366
2014-01-06T09:03:00.000
7
0
0
0
python,django,django-admin
20,947,402
1
false
1
0
You should always use manage.py rather than django-admin.py to run any commands that depend on an existing project, as that sets up DJANGO_SETTINGS_MODULE for you.
1
4
0
I am using Django 1.5.1, and according to django documentation cleanup is deprecated in this django version and cleansessions should be used. When I try using cleansessions it states unknown command. And when I type djando-admin.py help. I don't get it listed in the commands, I instead get cleanup listed. And on using django-admin.py cleanup, I get the following error - ImproperlyConfigured: Requested setting USE_I18N, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. Any Idea what is causing so.
Unknown command: 'clearsessions'
1
0
0
1,392
20,948,650
2014-01-06T11:05:00.000
3
0
1
1
python,cygwin
24,809,797
1
false
0
0
Try os.system("cat /path/foo.wav > /dev/dsp"). You need to install the audio package for Cygwin first.
1
2
0
I need to play .wav files stored on my PC using Python script from Cygwin. Please advice if this is possible? If so please provide pointers etc, to Python script code which can be used from Cygwin. I am working on a 64-bit Windows 7 machine. This is what I have done so far. Downloaded and installed setup-x86_64.exe from cygwin website. Installed packages as part of Cygwin: make,gcc,g++,git,ssh,sox,python ver >= 2.7, curl,wget. Please advice on how to play .wav files using Python (version >= 2.7) from Cygwin.
To play .wav files using Python from within Cygwin
0.53705
0
0
1,371
20,948,960
2014-01-06T11:22:00.000
2
0
0
0
python-2.7,ubuntu,python-imaging-library,openerp
21,006,017
2
false
1
0
You can solve this by uninstalling PIL, but it's a bit like preventing fillings by pulling out your teeth; you solve the immediate problem but... The IOError you are seeing is usually because PIL can't handle jpeg images. This happens because PIL is using hard-coded library paths. To fix (on Ubuntu 12.04): pip uninstall PIL; sudo apt-get install libjpeg8-dev; sudo ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib; pip install PIL. Note the output at the end of the PIL install; it will tell you which image types it is now handling.
1
0
0
I have installed openERP 7 multiple times on my Ubuntu 13.04 machine. I am unable to create a new user in openERP 7. When I try to create a new user it shows the message IOError: decoder zip not available. I am unable to post the complete output of the error message. I have already installed all required Python packages, but that has not solved it yet.
OpenERP IOError: decoder zip not available
0.197375
0
0
2,118
20,950,640
2014-01-06T13:02:00.000
0
0
1
1
python,django,virtualenv,pythonpath,manage.py
21,575,289
2
true
1
0
The problem was solved by adding a python path: add2virtualenv '/home/robert/Vadain/vadain.webservice.curtainconfig/'
1
0
0
I have a problem in virtualenv where a wrong python path is imported. The reason is that when running the command: manage.py help --pythonpath=/home/robert/Vadain/vadain.webservice.curtainconfig/ the result is right, but when I run manage.py help I am missing some imports. I searched on the internet, but nothing has helped. The last change I made was adding the following text at the end of the file virtualenvs/{account}/bin/activate: export PYTHONPATH=/home/robert/Vadain/vadain.webservice.curtainconfig But this does not solve the problem; does somebody else have a suggestion to fix it?
manage.py help has different python path in virtualenv
1.2
0
0
1,129
20,951,914
2014-01-06T14:18:00.000
9
1
1
0
python,code-coverage
24,137,339
6
false
0
0
This feature doesn't exist in coverage.py. Does it help that you can sort the HTML report to move 100% files to the bottom, or files with 0 statements to the bottom? UPDATE: As of coverage.py 4.0, the --skip-covered option is available to do exactly what is requested.
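Since coverage.py 4.0 the same behavior can also be made permanent in configuration; a .coveragerc sketch (section and option names per the coverage.py docs):

```ini
[report]
# Requires coverage.py >= 4.0: drop files with 100% coverage
# (empty __init__.py files always qualify) from the report.
skip_covered = True
```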
1
23
0
coverage.py will include init.py in its report and show it as 0 lines, but with 100% coverage. I want to exclude all blank files from coverage report. I can't just add */__init__.py to omit as some of my __init__.py files have code.
Ignoring empty files from coverage report
1
0
0
5,229
20,954,090
2014-01-06T16:07:00.000
0
0
0
0
python,django,url,filepath,file-structure
20,954,419
2
false
1
0
I would suggest looking into forms on the Django documentation site. When the user submits the form the appropriate file structure information will be passed to your view code. The view code can then pass the new file structure information to your template. The template will create the forms and the process will start over again.
1
0
0
I am attempting to create a django based website. The goal of one part of the site is to show the contents of a database in reference to its file structure. I would like to keep the URL the same while traveling deeper into the file structure, as opposed to developing another view for each level of the file structure a person goes into. The simplest way I can think to achieve this is to pass the current directory path in the request variable but I am not sure how to do this or if it is possible since it would have to be linked in the html file, and not the view file that is written in python. If you are able to at very least point me in the right direction it would be greatly appreciated!
Keep URL while surfing data Structure in Django web app
0
0
0
163
20,958,234
2014-01-06T20:01:00.000
0
0
1
1
python,pyinstaller
46,979,196
4
false
0
0
I had this same problem after I turned my .py file to an .exe file using pyinstaller (I'm using Python 3.6). It would run fine on my computer, but when sending it to others to run, firstly the computer would try to stop it from running (understandable, but you can tell Windows that you trust it when the pop-up appears). It would then be saved to their computer. I tried to run the file and got the same pop-up you did. I figured it was their anti-virus stopping it from running, so opened the anti-virus software and added an exception for my file. After that it worked fine. Granted, it's an inconvenient way to do it, but until I learn further it works for now.
2
0
0
I am trying to use Pyinstaller to create a python 2.7 executable in windows 7. I followed all the suggestions in the manual (using pip-win and Pywin32) but once the file has been created I cannot open the application and I get the error message: "Windows cannot access the specified the specified device, path, or file. You may not have the appropriate permissions to access the item." Does anyone have any idea why this might be happening and what I can do to prevent it? Sorry if this question is a bit vague, I will try and provide more details if I can. Thanks in advance
Error opening python executable in Windows after using Pyinstaller
0
0
0
4,038
20,958,234
2014-01-06T20:01:00.000
0
0
1
1
python,pyinstaller
56,158,805
4
false
0
0
I have had the same problem since today (the last few days it was working fine). I figured out that the problem occurs when I create the .exe file with --icon; if you create the file without --icon it should work fine.
2
0
0
I am trying to use Pyinstaller to create a python 2.7 executable in windows 7. I followed all the suggestions in the manual (using pip-win and Pywin32) but once the file has been created I cannot open the application and I get the error message: "Windows cannot access the specified the specified device, path, or file. You may not have the appropriate permissions to access the item." Does anyone have any idea why this might be happening and what I can do to prevent it? Sorry if this question is a bit vague, I will try and provide more details if I can. Thanks in advance
Error opening python executable in Windows after using Pyinstaller
0
0
0
4,038
20,960,030
2014-01-06T21:57:00.000
0
1
1
0
javascript,php,python,html
20,960,259
1
true
0
0
Yes. Koding.com, which is currently in beta, offers you free space and basically a development server, and it's much like Stack Overflow. You can share code snippets and work with multiple people there.
1
0
0
Is there a place that I can post code to have it looked over by others? Where they can help edit it and post suggestions on what they think would make it more efficient. You would think that I am asking about the site I am currently posting to (SO). However, I mean where people are just willing to look it over and help debug. Not where you have to have a specific question about a certain piece of your code. Back in the day it would just be a group of buddies all working on one project in the living room of someone's house where they all brought their computers over to. My friends have lost interest in programming though. So I am looking for something that can hook me up with other people so we can critique each other. Is it out there? Or do I need to build it?
Code Editing over the internet
1.2
0
0
56
20,960,667
2014-01-06T22:40:00.000
0
0
0
0
python,django
20,961,607
4
false
1
0
If admins know SQL, give them phpMyAdmin with read-only privileges.
1
0
0
I want to expose ORM control to my users. I don't mean I want them to add stuff to part of the code, I mean I want to actually allow them to write django code. I only need to allow specific models, and only allow fetching of data (not add or change anything). There'll be like a console and each line will be executed (sort of like ipython notebook), and the returned data (if it's a QuerySet object) will then be displayed in a table of some sort. This feature will only be available to my super-user admins and so won't be a security concern. What's the best way to do this (if at all possible)? update Maybe I should give some background for the intended usage here. See I have built an app that collects and saves statistical information. My users have many filters I have built for them, But they constantly ask for more and more flexibility, sometimes they need to filter on something very specific and it's only a one-time thing, so I can't just keep adding more and more features. Now my superusers know a little python, and I got the idea that maybe I can give them some sort of way to filter on their own. The idea is they will be able to save queries and name them, then adding those custom filters to a list present on the main site. The way it would work is they get a QuerySet object containing all objects, which they can filter using a list of pre-determined commands. After running the command, the server will evaluate it, look for errors or forbidden code, and only then will run it. But I'm guessing I can't just use eval() in a production server, now can I? So it there some other way?
Allow admins to control Django ORM through a view
0
0
0
76
20,960,689
2014-01-06T22:42:00.000
2
0
0
1
python,directory,zip
20,960,749
3
false
0
0
Yes, you can use inotify (e.g. via pyinotify) to get a callback whenever a new file is created. It is not available on Windows, though. There might be a similar API available there, but I don't know if there are Python bindings for it.
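Since the asker turned out to be on OS X, where inotify is unavailable, a portable polling sketch using only the stdlib could work instead (the function name and intervals here are my own; for event-driven cross-platform watching see the watchdog package):

```python
import os
import time

def wait_for_new_zips(folder, poll_interval=1.0, max_polls=10):
    """Poll `folder` and return paths of .zip files that appear.

    A portable stdlib sketch: it works on OS X and Windows too,
    unlike inotify, at the cost of a small polling delay.
    """
    seen = set(os.listdir(folder))
    new_zips = []
    for _ in range(max_polls):
        time.sleep(poll_interval)
        current = set(os.listdir(folder))
        new_zips.extend(os.path.join(folder, name)
                        for name in sorted(current - seen)
                        if name.endswith(".zip"))
        seen = current
    return new_zips
```

Each returned path could then be handed to the unzip/upload/delete steps.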
2
2
0
I have a specific folder in which I download certain .zip files. I am writing a python script to automate the unzip, upload, and deletion of files from this folder. Is there a way to automatically trigger my python script each time a zip file is downloaded to this folder? [EDIT] : i am on osx mavericks, sorry for not mentioning this from the start
trigger python script everytime a file is downloaded into a specific folder
0.132549
0
0
1,034
20,960,689
2014-01-06T22:42:00.000
1
0
0
1
python,directory,zip
20,960,761
3
false
0
0
The easiest way I can think of: make a cron job, let's say every minute, that launches a script to check the directory in question for any new zip files. If any are found, it will trigger unzipping, upload and deletion. If you don't want to create a cron job you can always think about creating a daemon (but why bother).
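A hypothetical crontab entry for that approach (all paths here are placeholders):

```shell
# Added via `crontab -e`: check the download folder every minute.
* * * * * /usr/bin/python /home/me/scripts/process_zips.py >> /tmp/process_zips.log 2>&1
```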
2
2
0
I have a specific folder in which I download certain .zip files. I am writing a python script to automate the unzip, upload, and deletion of files from this folder. Is there a way to automatically trigger my python script each time a zip file is downloaded to this folder? [EDIT] : i am on osx mavericks, sorry for not mentioning this from the start
trigger python script everytime a file is downloaded into a specific folder
0.066568
0
0
1,034
20,961,287
2014-01-06T23:33:00.000
5
0
1
0
python,matplotlib,ipython
20,961,366
5
false
0
0
As its name implies, Pylab is a MATLAB-like front end for doing mathematics in Python. iPython has specific support for Pylab, which is invoked using the %pylab magic command.
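Roughly, %pylab amounts to a batch of imports plus matplotlib setup. A simplified sketch (the real magic does more, e.g. selecting a backend and enabling interactive plotting; the matplotlib line is commented out here):

```python
# A simplified sketch of what `%pylab` injects into the session:
import numpy
import numpy as np
from numpy import *          # arange, array, sin, pi, ... at top level
# from matplotlib.pylab import *   # plot, figure, show, ...

x = arange(3)                # available without any numpy. prefix
```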
1
78
0
I keep seeing people use %pylab in various code snippits, particularly with iPython. However, I cannot see where %pylab is mentioned anywhere in Learning Python (and the few other Python books I have) and am not really sure what it means. I'm sure the answer is simple, but can anyone enlighten me?
What is %pylab?
0.197375
0
0
69,536
20,964,906
2014-01-07T05:57:00.000
0
0
1
1
python,linux,module,kivy
20,985,021
1
true
0
0
Ultimately, I resolved this by removing kivy using apt-get, then removing the stable repository from apt-get. I then added the daily repository again just to be sure (sudo add-apt-repository ppa:kivy-team/kivy-daily, note the -daily), updated, and then installed kivy. This gave me the 1.8.0 version, which has the storage module as expected. There are some minor differences between the two versions, however, this was sufficient for me. It appears that the stable 1.7.2 version simply doesn't have the storage module in the setup.py and thus does not compile with it.
1
0
0
I am attempting to set up kivy on linux, specifically Mint 13. I have followed the instructions on the kivy site, specifically, I added the daily repository to apt, and then used apt-get to install python-kivy. I wish to use the storage module, however, upon trying to from kivy.storage.jsonstore import JsonStore, it throws an ImportError: No module named storage.jsonstore. I have checked dist-packages/kivy, and indeed, the storage directory, with the files, is there as expected. (It should be noted that this is the reason I used the daily repository; the stable version does not have the storage module for some reason.) I have previously managed to get the storage module to work on my Windows machine simply by adding the module to my kivy directory, however, it fails here, on Linux Mint. How should I proceed?
Python kivy storage module
1.2
0
0
528
20,965,764
2014-01-07T07:01:00.000
1
1
1
0
python,eclipse,pydev,python-unittest
20,967,486
1
true
0
0
I think that the problem is in the way you are constructing your tests. There are two problems I see: If tests are failing because of poor image recognition, then surely they indicate either a bug in Sikuli, or a badly designed test. Unit tests should be predictable and repeatable, so requiring that they run several times indicates that they are not well set up. If you really do need to run the UI tests multiple times, then this should be done in the code, not in the IDE, since you can't guarantee that they will always be run in that environment (e.g. what if you want to move to CI?). So you need something like this in your code: def test_ui_component(self): for i in range(10): # Test code here You could probably abstract the pattern out using a decorator or class inheritance if you really want to.
1
2
0
I was using Pydev on Eclipse. I understand if I have a Eclipse folder with 5 files containing unit-tests, I can run these tests once by right-clicking on the name of the folder in Eclipse, choosing "Run-As" --> "Python unit-test". This works fine for me. What would be the recommended way to run these tests for the fixed number of times? For example, if I wanted to run the 5 tests in the folder 10 times each? I would be very grateful if you could help me out. Thanks!
Running unit-tests using PyDev
1.2
0
0
795
20,967,112
2014-01-07T08:34:00.000
7
1
1
0
python,configuration,pyramid,pycharm
20,967,674
3
true
0
0
You should remove all the installed bits in Python site-packages and run python setup.py develop to create a symlink (or .egg-link) to your project in site-packages, instead of the actual installed package. This should make your changes work as usual, without running install all the time.
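A sketch of the commands (the package name is a placeholder; run from the directory containing setup.py):

```shell
pip uninstall MyProject        # remove the installed copy first
python setup.py develop        # link the working tree into site-packages
# equivalent: pip install -e .
```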
1
2
0
I am just starting learning Pyramid using Pycharm. I have been reading tutorials but unfortunately there don't seem to be many out there. My problem is that whenever I make a change to the source I have to run python setup.py install before I can test my changes. This step seems unnecessary and I am confused why this is the case. I am developing in Pycharm on Windows. I would like to be able to change the code, restart the server, and see my changes reflected on the site immediately (skipping the distutils step).
Pyramid - I have to run python setup.py install before changes register
1.2
0
0
1,905
20,971,073
2014-01-07T11:59:00.000
0
0
0
0
python,nlp,nltk,named-entity-recognition,pos-tagger
33,822,675
3
false
0
0
Named Entity Recognition (e.g. Stanford NER) is enough for your problem; using POS tagging will not help. A sufficient amount of training data for generating the NER model would give you good results. Note that Stanford NER uses a CRF classifier.
1
2
1
For text that contains company names I want to train a model that automatically tags contractors (company executing the task) and principals (company hiring the contractor). An example sentence would be: Blossom Inc. hires the consultants of Big Think to develop an outsourcing strategy. with Blossom Inc as the principal and Big Think as the contractor. My first question: Is it enough to tag only the principals and contractors in my training set or is it better to additionally use POS-tagging? In other words, either Blossom/PRINCIPAL Inc./PRINCIPAL hires/NN the/NN consultants/NN of/NN Big/CONTRACTOR Think/CONTRACTOR to/NN develop/NN an/NN outsourcing/NN strategy/NN ./. or Blossom/PRINCIPAL Inc./PRINCIPAL hires/VBZ the/DT consultants/NNS of/IN Big/CONTRACTOR Think/CONTRACTOR to/TO develop/VB an/DT outsourcing/NN strategy/NN ./. Second question: Once I have my training set, which algorithm(s) of the nltk-package is/are most promising? N-Gram Tagger, Brill Tagger, TnT Tagger, Maxent Classifier, Naive Bayes, ...? Or am I completely on the wrong track here? I am new to NLP and I just wanted to ask for advice before I invest a lot of time in tagging my training set. And my text is in German, which might add some difficulties... Thanks for any advice!
Named-entity recognition: How to tag the training set and chose the algorithm?
0
0
0
2,772
20,971,366
2014-01-07T12:14:00.000
4
0
0
1
python,google-app-engine,pyinstaller
20,971,741
1
true
1
0
HTTP doesn't support file permissions, i.e. there is no way to make a downloaded file executable by default. If your concern is to save users from messing with chmod, you can serve a .tar.gz archive, which does record whether a file is executable or not.
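As a sketch of the archive approach (file names here are made up), Python's tarfile module stores the Unix permission bits, so the executable bit survives the round trip even though the HTTP download itself carries no permission metadata:

```python
import os
import stat
import tarfile
import tempfile

# Package a would-be executable into a .tar.gz.
workdir = tempfile.mkdtemp()
exe_path = os.path.join(workdir, "my_app")
with open(exe_path, "w") as f:
    f.write("#!/bin/sh\necho hello\n")
os.chmod(exe_path, 0o755)                     # executable before packing

archive = os.path.join(workdir, "my_app.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(exe_path, arcname="my_app")

# Simulate the user's side: extract and check the mode survived.
outdir = tempfile.mkdtemp()
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(outdir)
mode = stat.S_IMODE(os.stat(os.path.join(outdir, "my_app")).st_mode)
```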
1
2
0
I generated a Unix executable with PyInstaller. I then changed the permissions of the file using chmod +x+x+x my_file -rwxr-xr-x my_file When I serve that file from mysite.appspot.com/static/filename, I successfully download my app but the file permissions change and it can't be run as an executable anymore. -rw-r--r my_file_after_being_downloaded How can I serve my file while keeping its permissions unchanged? (note that I can confirm that manually chmod-ing this downloaded file does turn it back into a Unix executable, and hence opens with double-click.)
Serving executable file on App Engine changes file permissions
1.2
0
0
109
20,975,798
2014-01-07T15:42:00.000
1
0
1
0
python,regex
20,975,860
1
true
0
0
Not so much risky as impossible. Try using that code in a pattern. sre_constants.error: redefinition of group name 'subgpA' as group 4; was group 2
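The answer above can be demonstrated directly: the pattern never gets past compilation, so there is no runtime risk to weigh.

```python
import re

# Reusing a subgroup name across the two alternatives is rejected
# at compile time, so the pattern cannot even be built:
try:
    re.compile(r"(?P<gp1>x(?P<subgpA>a))|(?P<gp2>y(?P<subgpA>b))")
    compiled = True
except re.error as exc:
    compiled = False
    message = str(exc)   # mentions the redefined group name
```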
1
0
0
Is it risky to uses such kind of pattern (?P<gp1>...(?P<subgpA>...)...)|(?P<gp2>...(?P<subgpA>...)...) where I use the same name for subgroups in different first level groups in an alternative ? For the names of the first level groups, they would be all different.
About the names of groups
1.2
0
0
67
20,977,523
2014-01-07T17:01:00.000
1
0
1
0
python,bash,security
20,978,080
1
false
0
0
As nrathaus suggested, you could use os.path.normpath to get a "normalized" path and check it for security issues.
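A sketch of such a check (the function name is mine), rejecting any username that would escape the master folder:

```python
import os

def secure_child_dir(base, username):
    """Return an absolute path for `username` under `base`, refusing
    any name that would escape the master folder.
    """
    # Path separators inside a username are never legitimate here.
    if os.sep in username or (os.altsep and os.altsep in username):
        raise ValueError("unsafe username: %r" % username)
    base = os.path.abspath(base)
    candidate = os.path.normpath(os.path.join(base, username))
    # normpath collapses ".." components, so an escape attempt no
    # longer starts with the base directory.
    if candidate == base or not candidate.startswith(base + os.sep):
        raise ValueError("unsafe username: %r" % username)
    return candidate
```

The returned path can then be passed to os.makedirs safely.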
1
1
0
I'm faced with the following problem: The users have some files that need syncing so I'm writing a script that copies the encrypted files from a user's directory to a temporary directory in the server before it gets distributed in the other 5 servers. The initial copy is done by creating a folder with the user's name and putting the files there. The users are free to change usernames so if someone changes his username to something nasty the server(s) is/are owned I have to use the usernames for folder names because the script that does the syncing is using the folder username for metadata of some sort. So, is there any way to escape the usernames and make sure that everything is created under the master folder?
How to create directories in a secure way in python
0.197375
0
0
100
20,978,982
2014-01-07T18:23:00.000
3
1
0
1
python,ubuntu,cron
22,439,701
2
false
0
0
I'm sorry to say I don't have the answer, but I think I know a bit of what's going on based on an issue I'm dealing with. I'm trying to get a web application and cron script to use some code that stashes an oauth token for Google's API into a keyring using python-keyring. No matter what I do, something about the environment the web app and cron job runs in requires manual intervention to unlock the keyring. That's quite impossible when your code is running in a non-interactive session. The problem persists when trying some tricks suggested in my research, like giving the process owner a login password that matches the keyring password and setting the keyring password to an empty string. I will almost guarantee that your error stems from Gnome-Keyring trying to fire up an interactive (graphical) prompt and bombing because you can't do that from cron.
1
11
0
I'm hooking a python script up to run with cron (on Ubuntu 12.04) -- easy enough. Except for authentication. The cron script accesses a couple services, and has to provide credentials. Storing those credentials with keyring is easy as can be -- except that when the cron job actually runs, the credentials can't be retrieved. The script fails out every time. As nearly as I can tell, this has something to do with the environment cron runs in. I tracked down a set of posts which suggest that the key is having the script export DBUS_SESSION_BUS_ADDRESS. All well and good -- I can get that address and, export it, and source it from Python fairly easily -- but it simply generates a new error: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11. Setting DISPLAY=:0 has no effect. So. Has anybody figured out how to unlock gnome-keyring from Python running on a cron job on Ubuntu 12.04?
Python, Keyring, and Cron
0.291313
0
0
2,578
20,981,545
2014-01-07T20:46:00.000
4
1
0
0
javascript,php,python,ruby-on-rails,caching
20,981,669
1
true
1
0
You can load the page as a static page and then load the small amount of dynamic content using AJAX. Then you can cache the page for as long as you'd like without problems. If the amount of dynamic content or some other aspect keeps you from doing that, you still have several options to improve performance. If your site is hit very frequently (like several times a second), you can cache the entire dynamically generated page for short intervals, such as a minute or thirty seconds. This will give you a tremendous performance improvement and will likely not be noticeable to the user, if reasonable intervals are used. For further improvements, consider caching database queries and other portions of the application, even if you do so for short intervals.
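The short-interval caching idea can be sketched framework-agnostically in Python (the TTL value and decorator shape are illustrative assumptions, not tied to any particular stack):

```python
import time
import functools

def cached(ttl=30):
    """Cache a zero-argument function's result for `ttl` seconds.

    A minimal sketch of short-interval page caching: the first call
    renders for real, and every call within the next `ttl` seconds
    reuses that result.  (No argument handling or locking -- a real
    deployment would use the framework's cache layer or memcached.)
    """
    def decorator(fn):
        state = {"value": None, "expires": 0.0}

        @functools.wraps(fn)
        def wrapper():
            now = time.time()
            if now >= state["expires"]:
                state["value"] = fn()        # regenerate the page
                state["expires"] = now + ttl
            return state["value"]
        return wrapper
    return decorator
```

Decorating an expensive render function with `@cached(ttl=30)` means that under heavy traffic the page is regenerated at most twice a minute.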
1
1
0
Having a layer of caching for static web pages is a pretty straightforward concept. On the other hand, most dynamically generated web pages in PHP, Python, Ruby, etc. use templates that are static, and there's just a small portion of dynamic content. If I have a page that's hit very frequently and that's 99% static, can I still benefit from caching when that 1% of dynamic content is specific to each user that views the page? I feel as though there are two different versions of the same problem. Content that is static for a user's entire session, such as a static top bar that's shown on each and every page (e.g. the top bar on a site like Facebook that may contain a user's picture and name). Can this user-specific information be cached locally in Javascript to prevent needing to request this same information for each and every page load? Pages that are 99% static and that contain 1% of dynamic content that is mostly unique for a given viewer and differs from page to page (e.g. a page that only differs by indicating whether the user 'likes' some of the content on the page via a thumbs-up icon, so most of the content is static except for the few 'thumbs up' icons for certain items on the page). I appreciate any insight into this.
Caching dynamic web pages (page may be 99% static but contain some dynamic content)
1.2
0
0
839
20,983,047
2014-01-07T22:13:00.000
3
1
0
0
python,redis,amazon,message-queue
29,395,022
1
false
0
0
You may have solved this by now since the question is old; I had the same issues a while back and the costs ($$) of polling SQS in my application were sizable. I migrated it to REDIS successfully without much effort. I eventually migrated the REDIS host to ElasticCache's REDIS and have been very pleased with the solution. I am able to snapshot it and scale it as needed. Hope that helps.
1
1
0
I have a collection of EC2 instances with a process installed on them that makes use of the same SQS queue; call it my_queue. This queue is extremely active, writing more than 250 messages a minute and deleting those 250 messages consecutively. The problem I've encountered at this point is that it is starting to be slow, and thus my system is not working properly: some processes hang because SQS closes the connection, as does writing to remote machines. The big advantages I have for using SQS are that 1) it's very easy to use, with no need to install or configure local files, and 2) it's a reliable tool, since I only need a key and key_secret in order to start pushing and pulling messages. My questions are: What alternatives exist to SQS? I know of Redis and RabbitMQ, but both need local deployment and configuration, and that might lead to unreliable functionality if, for example, the box that is running it suddenly crashes and other boxes are not able to write messages to the queue. If I choose something like Redis, to be deployed in my box, is it worth it over SQS, or should I just stay with SQS and look for another solution? Thanks
Alternative to Amazon SQS for Python
0.53705
0
0
739
20,983,858
2014-01-07T23:04:00.000
6
0
1
0
python,eclipse,out-of-memory,pydev
21,041,901
1
true
1
0
Python requires no such flag (so this is not really PyDev-related). Python (unlike Java) will happily use all the memory available on your computer, so in this case your algorithm really is using up all the memory it can. Note that if you are running a Python compiled for 32 bits, the maximum memory for the process is 2GB. If you need more memory (and have it available in your computer), you need to use a 64-bit compiled Python (usually marked as x86_64).
1
2
0
I'm working on an indexing system and I need a lot of RAM. As far as I know, in Java we can pass parameters to the JVM to increase the heap size, but in Python I couldn't figure out how, and every time I run my application I get a MemoryError after indexing tens of thousands of documents.
Increase memory in Pydev using run configurations
1.2
0
0
2,095
20,984,266
2014-01-07T23:39:00.000
2
0
0
0
python,python-2.7,csv,random,pandas
20,984,538
1
false
0
0
There are many ways to implement this, but the abstract algorithm should be something like this. First, to create a new CSV that meets your second criterion about each state being drawn with equal probability, draw each row as follows. 1) From the set of states, draw a state (each state is drawn with probability 1 / # of states). Let that state be s. 2) From the large CSV, draw a row from the set of rows where STATE = s. As you draw rows, keep a record of the number of rows drawn for a given state/city pair. You could do this with a dictionary. Then, each time you draw a successive row, if there are any state/city pairs at the cap set by the user, exclude those state/city pairs from your conditional draw in step 2 above. This will satisfy your first requirement. Does that make sense? If you get started with a bit of code that attempts to implement this, I'll happily tidy it up for you if it has any problems. If you wanted to do the "somewhat trickier" algorithm in which the probability of selecting a city decreases with each selection, you could do that without much trouble. Basically, condition on the cities within state s after you draw s, then weight according to the number of times each city in that state has been drawn (you have this information because you've been storing it to implement the first requirement). You'll have to come up with the form of the weighting function, as it isn't implied by your description. Again, if you try to code this up, I'm happy to take a look at any code you post and make suggestions.
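A minimal sketch of the abstract algorithm above (the tuple layout `(phone, name, city, state)` and the helper name are assumptions based on the question's column description):

```python
import random
from collections import defaultdict

def sample_rows(rows, n, city_cap=3):
    """Draw up to n rows so each state is equally likely per draw,
    capping each (state, city) pair at `city_cap` picks.

    Rows are assumed to be (phone, name, city, state) tuples.
    """
    by_state = defaultdict(list)
    for row in rows:
        by_state[row[3]].append(row)

    counts = defaultdict(int)   # picks per (state, city) pair
    picked = []
    while len(picked) < n:
        # States that still have at least one un-capped city.
        eligible = [s for s in sorted(by_state)
                    if any(counts[(s, r[2])] < city_cap for r in by_state[s])]
        if not eligible:
            break  # every city everywhere is at the cap
        state = random.choice(eligible)            # uniform over states
        pool = [r for r in by_state[state]
                if counts[(state, r[2])] < city_cap]
        row = random.choice(pool)
        counts[(state, row[2])] += 1
        picked.append(row)
    return picked
```

The `counts` dictionary is the same record you would reuse for the "trickier" weighted version, since it already tracks how often each city has been drawn.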
1
0
1
I have a (large) directory CSV with columns [0:3] = Phone Number, Name, City, State. I created a random sample of 20,000 entries, but it was, of course, weighted drastically to more populated states and cities. How would I write a python code (using CSV or Pandas - I don't have linecache available) that would equally prioritize/weight each unique city and each state (individually, not as a pair), and also limit each unique city to 3 picks? TRICKIER idea: How would I write a python code such that for each random line that gets picked, it checks whether that city has been picked before. If that city has been picked before, it ignores it and picks a random line again, reducing the number of considered previous picks for that city by one. So, say that it randomly picks San Antonio, which has been picked twice before. The script ignores this pick, places it back into the list, reduces the number of currently considered previous San Antonio picks, then randomly chooses a line again. IF it picks a line from San Antonio again, then it repeats the previous process, now reducing considered San Antonio picks to 0. So it would have to pick San Antonio three times in a row to add another line from San Antonio. For future picks, it would have to pick San Antonio four times in a row, plus one for each additional pick. I don't know how well the second option would work to "scatter" my random picks - it's just an idea, and it looks like a fun way to learn more pythonese. Any other ideas along the same line of thought would be greatly appreciated. Insights into statistical sampling and sample scattering would also be welcome.
Dispersing Random Sampling in CSV through Python
0.379949
0
0
114
20,984,918
2014-01-08T00:39:00.000
3
0
0
0
python,scikit-learn,scikits
21,009,215
2
false
0
0
You can set the verbose option to a value >0. That will at least give you the results on stdout.
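With a current scikit-learn, that looks roughly like this (the class lived in sklearn.grid_search at the time of the question; the exact progress lines printed vary by version, and the parameter grid here is just an example):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
params = {"C": [0.1, 1.0], "gamma": [0.01, 0.1]}

# With verbose > 0, each parameter combination and its fold score is
# printed to stdout as soon as it finishes, so partial results are
# visible while the search is still running.
search = GridSearchCV(SVC(), params, cv=3, verbose=2)
search.fit(X, y)
print(search.best_params_)
```

Redirecting stdout to a log file then gives you a record of every completed combination even if the last process never finishes.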
1
5
1
I have started a grid search for SVM parameters over a rather wide range. Most of the search space has been calculated, and now I've got one last process, which has already been running for 100 hours. I'd like to see the results that have already been calculated. Is there any way to do it? Thanks in advance!
How to obtain GridSearchCV partly finished results?
0.291313
0
0
849
20,985,823
2014-01-08T02:20:00.000
1
0
1
1
macos,bash,terminal,osx-mountain-lion,pythonpath
20,986,449
2
false
0
0
Put the path setting in /etc/profile; it will affect all users. Or put the path in your home directory's ~/.profile, ~/.bashrc, or ~/.kshrc (depending on your shell).
1
3
0
I have a problem where my PYTHONPATH variable always has a blank value. I can fix it temporarily like this: export PYTHONPATH=$(python -c 'import sys;print ":".join(sys.path)') but is there a more permanent way to do this?
OSX Permanently Set PYTHONPATH
0.099668
0
0
3,533
20,986,631
2014-01-08T03:44:00.000
2
0
0
0
python,selenium,selenium-webdriver,automated-tests
65,731,313
21
false
1
0
You can scroll down by a fixed amount by inserting this line: driver.execute_script("window.scrollBy(0, 925)")
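For infinite-scroll pages like the one described, a common pattern is to keep scrolling until the document height stops growing (a sketch; the helper name is an assumption, and the pause should be tuned to the page's AJAX load time):

```python
import time

def scroll_to_bottom(driver, pause=1.0, max_rounds=50):
    """Scroll a Selenium WebDriver (or driver-like object) until the
    document height stops growing, e.g. on infinite-scroll pages."""
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # give dynamically loaded content time to arrive
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break  # nothing new loaded; we've reached the bottom
        last_height = new_height
```

Because the function only relies on execute_script, the stopping logic can be exercised with a fake driver before pointing it at a real browser.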
1
209
0
I am currently using selenium webdriver to parse through facebook user friends page and extract all ids from the AJAX script. But I need to scroll down to get all the friends. How can I scroll down in Selenium. I am using python.
How can I scroll a web page using selenium webdriver in python?
0.019045
0
1
415,578
20,987,496
2014-01-08T05:12:00.000
1
0
0
0
python,twisted,twisted.web
20,999,393
1
true
1
0
There is no API in Twisted Web like something.set404(someResource). A NOT FOUND response is generated as the default when resource traversal reaches a point where the next child does not exist - as indicated by the next IResource.getChildWithDefault call. Depending on how your application is structured, this means you may want to have your own base class implementing IResource which creates your custom NOT FOUND resource for all of its subclasses (or, better, make a wrapper since composition is better than inheritance). If you read the implementation of twisted.web.resource.Resource.getChild you'll see where the default NOT FOUND behavior comes from and maybe get an idea of how to create your own similar behavior with different content.
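A sketch of that approach (the class names here are assumptions; only Resource.getChild and render are Twisted's actual API):

```python
import json

from twisted.web.resource import Resource

class JSONNotFound(Resource):
    """A 404 resource that answers in JSON instead of the default HTML."""
    isLeaf = True

    def render(self, request):
        request.setResponseCode(404)
        request.setHeader(b"content-type", b"application/json")
        return json.dumps({"error": "not found"}).encode("utf-8")

class APIResource(Resource):
    """Base/wrapper resource: any unknown child resolves to the JSON 404."""
    def getChild(self, name, request):
        return JSONNotFound()
```

Resources built on APIResource then return the JSON body for any unmatched path segment, instead of Twisted's default HTML NOT FOUND page.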
1
4
0
I am quite surprised I couldn't find anything on this in my Google searching. I'm using TwistedWeb to make a simple JSON HTTP API. I'd like to customize the 404 page so it returns something in JSON rather than the default HTML. How might I do this?
TwistedWeb: Custom 404 Not Found pages
1.2
0
1
396
20,992,032
2014-01-08T09:46:00.000
22
0
1
1
python,linux,nfs
55,601,574
3
false
0
0
For those on Ubuntu the package you need to install is libkrb5-dev
1
18
0
I am trying to install pynfs on RHEL 6.4 based VM command executed is python setup.py build, but I am getting this issue, error: gssapi/gssapi.h: No such file or directory, this issue is seen when setup.py build is executed for nfs4.0 directory, Moving to nfs4.0 running build running build_py running build_ext building 'rpc.rpcsec._gssapi' extension gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/kerberos/include -I/usr/include/python2.6 -c lib/rpc/rpcsec/gssapi_wrap.c -o build/temp.linux-x86_64-2.6/lib/rpc/rpcsec/gssapi_wrap.o -Wall lib/rpc/rpcsec/gssapi_wrap.c:2521:27: error: gssapi/gssapi.h: No such file or directory lib/rpc/rpcsec/gssapi_wrap.c:2528: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘krb5oid’ lib/rpc/rpcsec/gssapi_wrap.c:2575: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘krb5oid_ptr’ lib/rpc/rpcsec/gssapi_wrap.c:2588: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_init_sec_context’ lib/rpc/rpcsec/gssapi_wrap.c:2759: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_gss_accept_sec_context’ lib/rpc/rpcsec/gssapi_wrap.c:2777: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_gss_get_mic’ lib/rpc/rpcsec/gssapi_wrap.c:2788: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘reordered_gss_wrap’ Can somebody help me resolve this issue? Also, for fedora the similar way of installation works.
pynfs: error: gssapi/gssapi.h: No such file or directory
1
0
0
15,906
20,994,285
2014-01-08T11:25:00.000
5
1
1
1
python,raspberry-pi,cross-compiling,ctype
20,994,609
1
true
0
0
Python is an interpreted bytecode language, so the actual Python code does not need to be cross-compiled in any way. Your shared libraries (files ending in .so) are not Python, however. You will need to obtain versions of those compiled for the correct architecture. It might well be that those are ordinary C extensions for Python, which can be built via setuptools or other means; that works equally well on ARM as it does on i386.
1
0
0
I have Python code that works on a 32bit intel machine running Ubuntu, and I need to run this code on Raspberry Pi. Would I need some sort of cross compiling? I have 32bit .so files included in python.
Would I need to cross compile Python to ARM?
1.2
0
0
947
20,994,716
2014-01-08T11:44:00.000
4
0
1
0
python,pip,ipython,package-managers,conda
62,257,819
13
false
0
0
To answer the original question: for installing packages, pip and conda are different ways to accomplish the same thing. Both are standard applications to install packages. The main difference is the source of the package files. pip/PyPI will have more "experimental" packages, or newer, less common versions of packages. conda will typically have more well-established packages or versions. An important cautionary side note: if you use both sources (pip and conda) to install packages in the same environment, this may cause issues later. Recreating the environment will be more difficult, and fixing package incompatibilities becomes more complicated. Best practice is to select one application, pip or conda, to install packages, and use that application to install any packages you need. However, there are many exceptions or reasons to still use pip from within a conda environment, and vice versa. For example: when there are packages you need that only exist in one, and the other doesn't have them; or you need a certain version that is only available in one environment.
3
964
0
I know pip is a package manager for python packages. However, I saw the installation on IPython's website use conda to install IPython. Can I use pip to install IPython? Why should I use conda as another python package manager when I already have pip? What is the difference between pip and conda?
What is the difference between pip and conda?
0.061461
0
0
443,267
20,994,716
2014-01-08T11:44:00.000
-1
0
1
0
python,pip,ipython,package-managers,conda
50,145,655
13
false
0
0
pip is for Python only. conda is for Anaconda plus other scientific packages, like R dependencies, etc. Not everyone needs Anaconda, which already comes with Python. Anaconda is mostly for those who do machine learning/deep learning, etc. A casual Python dev won't run Anaconda on their laptop.
3
964
0
I know pip is a package manager for python packages. However, I saw the installation on IPython's website use conda to install IPython. Can I use pip to install IPython? Why should I use conda as another python package manager when I already have pip? What is the difference between pip and conda?
What is the difference between pip and conda?
-0.015383
0
0
443,267
20,994,716
2014-01-08T11:44:00.000
-1
0
1
0
python,pip,ipython,package-managers,conda
54,109,432
13
false
0
0
I may have found one further difference of a minor nature. I have my python environments under /usr rather than /home or whatever. In order to install to it, I would have to use sudo pip install. For me, the undesired side effect of sudo pip install was slightly different than what is widely reported elsewhere: after doing so, I had to run python with sudo in order to import any of the sudo-installed packages. I gave up on that and eventually found I could use sudo conda to install packages to an environment under /usr, which then imported normally without needing sudo permission for python. I even used sudo conda to fix a broken pip rather than using sudo pip uninstall pip or sudo pip install --upgrade pip.
3
964
0
I know pip is a package manager for python packages. However, I saw the installation on IPython's website use conda to install IPython. Can I use pip to install IPython? Why should I use conda as another python package manager when I already have pip? What is the difference between pip and conda?
What is the difference between pip and conda?
-0.015383
0
0
443,267
20,996,193
2014-01-08T12:51:00.000
29
0
0
0
python,user-interface,qt5,pyqt5
21,359,084
3
false
0
1
Been looking for PyQt5 tutorials for some time? Look no further! You won't find many around the internet. Not really tutorials, but pretty self-explanatory basic scripts under the following path: /python/lib/site-packages/PyQt5/examples you will find about 100 examples in 30 folders ranging from beginner to advanced, covering basic windows, menus, tabs, layouts, network, OpenGL, etc.
1
65
0
I am looking for a PyQt5 tutorial. It is rather complicated to start GUI development with Python for the first time without a tutorial. I only found some PyQt4 tutorials so far, and since something changed from Qt4 to Qt5, for example the fact SIGNAL and SLOT are no more supported in Qt5, it would be nice to have specific tutorials for PyQt5. Can someone please provide a tutorial on how to start GUI development with PyQt5?
Is there a tutorial specifically for PyQt5?
1
0
0
57,823
20,998,586
2014-01-08T14:37:00.000
0
0
1
1
python
21,091,640
2
false
0
0
The file was always being written to the home folder. I grabbed the absolute path and it worked perfectly for me. Thanks to Paulo for the hint about the file's permissions.
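A generic way to make a script's file operations independent of the launch directory is to resolve paths relative to the script file itself (a sketch; the helper name is hypothetical):

```python
import os

# Directory containing this script, regardless of the current working
# directory the interpreter was launched from.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

def open_data_file(name):
    """Open a file that lives next to the script, not next to the shell.

    Without this, relative paths resolve against wherever you typed
    `python`, which is why `python /folder1/a.py` wrote to $HOME.
    """
    return open(os.path.join(SCRIPT_DIR, name))
```

With this pattern, `python a.py` and `python /folder1/a.py` read and write the same files.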
1
1
0
I am trying to execute a python file located in a folder named folder1. For this I am using: ~$ python /folder1/a.py (doesn't work) When I go to that folder and then execute, everything works fine: ~/folder1$ python a.py (works) I think it's the file write operation in a.py that makes the first way of executing fail. Please give some suggestions to fix this.
Python: File operation problems in python execution
0
0
0
73
20,999,456
2014-01-08T15:14:00.000
3
0
0
1
java,android,python,google-app-engine
20,999,802
2
true
1
0
You can really go with either, to be honest, and use whatever suits your style. When I started using App Engine, I was Java all the way. I recently switched to Python and love it too! If you have a lot of existing java dependencies, such as libraries etc. that you want to continue using, then stick with it. Otherwise, it's worth dipping your toe in the Python waters.
2
1
0
I am developing an application for Android using Google App Engine and Google Compute Engine as the backend. I have followed Google's demo code in Python as the base for my application. Now I have a question: since I am more familiar with Java than Python, and also considering the fact that Google supports Python more than Java in most of its demo code, should I change my GAE backend language to Java? Or should I stick with Python and hope that I will come around to Python eventually? Any suggestions are appreciated. Thanks
Python or Java as Backend Language in Google App engine?
1.2
0
0
4,232
20,999,456
2014-01-08T15:14:00.000
4
0
0
1
java,android,python,google-app-engine
20,999,591
2
false
1
0
Here are some points to consider: Both Python and Java are capable languages and App Engine Services are available to a large extent in both the environments. You should use the environment that you are most comfortable with. This will help when debugging issues on the Server side. I would go with the language that I am most familiar with in case the application is critical, is on a tight deadline, etc. If you are learning the environment and have the time, it is great to look at a new language. Since you are writing an Android application that is interacting with your Server side application in App Engine, one assumes that you would be exposing this functionality over Web Services. Both Python and Java environments are capable of hosting Web Services. In fact, with Google Cloud Endpoints, you should be able to even generate client side bindings (client libraries) for Android that integrate easily.
2
1
0
I am developing an application for Android using Google App Engine and Google Compute Engine as the backend. I have followed Google's demo code in Python as the base for my application. Now I have a question: since I am more familiar with Java than Python, and also considering the fact that Google supports Python more than Java in most of its demo code, should I change my GAE backend language to Java? Or should I stick with Python and hope that I will come around to Python eventually? Any suggestions are appreciated. Thanks
Python or Java as Backend Language in Google App engine?
0.379949
0
0
4,232
20,999,674
2014-01-08T15:23:00.000
0
0
0
0
python,reference,maya,scene
21,001,341
2
true
0
0
There's no way to interact with a Maya scene without it already in Maya. I think your method is correct. What do you mean by "match a number of them into my scene"? Do you mean you want to make multiple references, based on the size? I.E. you want to fill up a given volume using the bounding box to determine how many will be needed? It seems that could be done after making one reference as easily as not.
2
0
0
I'm trying to check the width of an object in another scene. The object in the other scene will be imported as a reference, but I need to know the width/height/depth (x/y/z bounding box) of the object in order to match a number of them into my scene according to parameters set by a script of mine. The only way I've figured out so far is to reference the object into the scene, check the bounding box with the xform command, remove the reference, and then proceed as normal. That solution seems both a bit slow (for large objects) and a bit awkward.
Getting the dimensions of objects in other Maya scenes
1.2
0
0
1,167
20,999,674
2014-01-08T15:23:00.000
0
0
0
0
python,reference,maya,scene
21,004,548
2
false
0
0
There's no other way to check than opening the file. You could do an an offline batch process to collect all of the information once and save it to a database or simple file such as a CSV for faster access if speed is really an issue.
2
0
0
I'm trying to check the width of an object in another scene. The object in the other scene will be imported as a reference, but I need to know the width/height/depth (x/y/z bounding box) of the object in order to match a number of them into my scene according to parameters set by a script of mine. The only way I've figured out so far is to reference the object into the scene, check the bounding box with the xform command, remove the reference, and then proceed as normal. That solution seems both a bit slow (for large objects) and a bit awkward.
Getting the dimensions of objects in other Maya scenes
0
0
0
1,167
20,999,881
2014-01-08T15:32:00.000
0
0
0
1
eclipse,python-3.x,pydev
21,000,138
1
false
0
0
This is unrelated to Eclipse and PyDev. Somewhere in your code, you catch all exceptions and turn them into such ugly lists. Stop doing that or convert the output into a single multi-line string and the output will look useful again. Alternatively, you can try to format the list line by line when you log the error.
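If the ugly list comes from your own logging code (e.g. from traceback.format_tb), the standard library can render the same information as a readable multi-line string instead (a generic sketch, unrelated to PyDev itself):

```python
import traceback

def run_safely(fn):
    """Run fn, printing any crash as a normal multi-line traceback
    rather than a raw list of frame strings."""
    try:
        return fn()
    except Exception:
        # format_exc() joins the frames with newlines, exactly as the
        # interpreter would print them to the console on a crash.
        print(traceback.format_exc())
        return None

def boom():
    raise ValueError("example failure")
```

Logging `traceback.format_exc()` as a single string, instead of passing the list from `format_tb` straight to the logger, is usually all it takes to make the output readable again.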
1
0
0
I'm writing python with PyDev and Eclipse. It's great, but when my code crashes, it prints my runtime stack to the console in the ugliest of ways. It just prints out a big list and it's really hard to read. There's gotta be a way to pretty this up, to make it way easier to read, right? Can PyDev do it? Thanks! For example: 2014-01-08 10:28:04,173 [error] Traceable Error raised during rendering process... - R:\qa\examples\testcases\testcase1.xml 2014-01-08 10:28:04,175 [error] [Exception] Failed to complete request: [' File "C:\Users\me\workspace\re\src\CntlrCmdLine.py", line 1001, in run\n mainFun(self, modelXbrl, coutputFolder)\n', ' File "C:\Users\me\workspace\re\src\Filing.py", line 27, in mainFun\n filing.mainFunDriver(cube)\n', ' File "C:\Users\me\workspace\re\src\Filing.py", line 115, in mainFunDriver\n embedding.parseCommandText()\n', ' Fi le "C:\Users\me\workspace\re\src\Embedding.py", line 70, in parseCommandText\n raise Exception\n'] - Report.py 2014-01-08 10:28:04,175 [warning] Cannot process input file. - R:\qa\reExamples\gd001cabbage\cabbage-20090501.xml
PyDev: Can I make the runtime stack shown on crashes look pretty?
0
0
0
36
21,000,038
2014-01-08T15:39:00.000
1
0
0
0
python,html,regex,web-crawler
21,000,334
1
false
1
0
I think the best way to do this would be to create some sort of mapping file. The file would map the original URL on the BBC site => the path to the file on your machine. You could generate this file very easily during the process when you are scraping the links from the homepage. Then, when you want to crawl this site offline you can simply iterate over this document and visit the local file paths. Alternatively you could crawl over the original homepage and do a search for the links in the mapping file and find out what file they lead to. There are some clear downsides to this approach, the most obvious being that changing the directory structure/filenames of the downloaded pages will break your crawl...
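The mapping-file idea might look like this during scraping (the sub0.html naming and JSON format are assumptions based on the question; note that the naive string replace can misfire if one URL is a prefix of another):

```python
import json

def build_mapping(links):
    """Map each original URL to the local file it was saved as
    (sub0.html, sub1.html, ...), and persist it for offline crawling."""
    mapping = {url: "sub%d.html" % i for i, url in enumerate(links)}
    with open("mapping.json", "w") as f:
        json.dump(mapping, f, indent=2)
    return mapping

def rewrite_homepage(html, mapping):
    """Point every known link at its local copy instead of the live site.

    Naive substring replacement: fine for distinct URLs, but an HTML
    parser would be safer for URLs that are prefixes of each other.
    """
    for url, local in mapping.items():
        html = html.replace(url, local)
    return html
```

As noted above, the obvious downside is that renaming or moving the downloaded files invalidates the mapping, so it should be regenerated whenever the download step reruns.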
1
1
0
I'm new to working with html pages on python. I'm trying to run the BBC site offline from my PC, and I wrote a python code for that. I've already made functions that download all html pages on the site, by going through the links found on homepage (with regex). I have all links on a local directory, but they are all called sub0,sub1,sub2. How can I edit the homepage so it would direct all links to the html pages on my directory instead of the pages online? again, the pages aren't called in their original name- so replacing the domain with a local directory won't work. I need a way to go through all links on main page and change their whole path.
Trying to download html pages to create a very simple web crawler
0.197375
0
1
162
21,000,158
2014-01-08T15:44:00.000
1
0
1
0
python,raspberry-pi,gpio
21,000,219
2
false
0
0
Sounds like an electrical issue. When you don't have anything connected, the voltage on the pin "floats" as the ambient charge changes, which can seem random. To solve this, ground the pin to 0.
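If wiring in an external pull-down resistor is inconvenient, the Pi can enable an internal one in software when the pin is configured (a sketch using the RPi.GPIO library; the BOARD numbering mode is an assumption matching "pin 7"):

```python
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)  # physical pin numbering, matching "pin 7"
# With the internal pull-down enabled, a disconnected pin reads a solid
# False instead of floating randomly between True and False.
GPIO.setup(7, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
print(GPIO.input(7))
GPIO.cleanup()
```

Connecting the pin to 3V3 then still reads True, while a disconnected pin is pulled to a stable False.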
1
0
0
I'm tinkering around with python and the raspberry pi's gpio pins. I wrote a simple program that prints the input of pin #7. When I connect the pin to the 3v3, there is a constant output of True. However, when I don't connect them there is no constant False output, but a random True/False output. This is probably not a software problem. I am currently using a vnc client so I can't post any code (no copy and paste), but it's only an endless while loop that prints RPi.GPIO.input(7). I have a B model from 2011.
gpio input not static
0.099668
0
0
427
21,003,048
2014-01-08T17:56:00.000
0
0
1
0
python,python-sphinx
69,760,454
3
false
0
0
If you don't want your script to end with a .py file extension, you can also try the following. Write your original script into a .py file and create another executable file (a bash script) that just executes the .py file. That way, you can document the script (the .py file) using Sphinx and still execute it through the other executable file.
1
5
0
Apologies - this is a complete n00b question: I have a couple of python scripts with no .py extension. How do I convince Sphinx that it should document that script? Example error: /home/XXX/YYYYY/development/dst/build/index.rst:25: (WARNING/2) autodoc can't import/find module 'site_scons.random', it reported error: "No module named random", please check your spelling and sys.path
python-sphinx documenting scripts with no py extension
0
0
0
615
21,006,620
2014-01-08T21:12:00.000
2
1
1
0
android,python,git,perl,android-ndk
21,015,004
1
true
0
1
Python and perl are used internally by NDK tools to make the cross-compile environment more friendly. You only need them on the host. NDK can be built for Windows, Mac, or Linux. So the git repository contains all opensource that is required to compile NDK for any of these platforms.
1
1
0
I downloaded the sources for the Android NDK from the git repository, and I noticed that the sources for perl and python are bundled with the other dependencies. What are these 2 interpreters for? Does this mean that I can build python for Android with the NDK? Or that if I have a python application I can port it to Android with the NDK?
Why do I get python and perl with the NDK sources?
1.2
0
0
155
21,013,186
2014-01-09T06:35:00.000
0
0
0
0
python,wxpython
21,225,163
1
true
0
1
I used the mode attribute from PIL (the Python Imaging Library) to judge the channels.
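A sketch of that check, using Pillow/PIL's documented mode and getbands attributes (the helper name is an assumption):

```python
from PIL import Image

def channel_count(image):
    """Number of channels, judged from PIL's band list:
    'L' -> 1 (grayscale), 'RGB' -> 3, 'RGBA' -> 4, etc."""
    return len(image.getbands())

# Tiny in-memory images standing in for the loaded bitmaps.
gray = Image.new("L", (4, 4))
color = Image.new("RGB", (4, 4))
```

Unlike wx.Bitmap.GetDepth, the mode reflects how the image data is actually stored, so an 8-bit grayscale file opens as mode "L" with a single band.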
1
0
0
I need to get the channel of wx.bitmap. I use the function GetDepth. It return 24 all the time , even if I pass a 8depth gray image
How to get the channel of a image by wx.bitmap
1.2
0
0
38
21,017,853
2014-01-09T10:38:00.000
2
0
0
0
python,windows,progress-bar,progress,s3cmd
21,165,278
2
true
1
0
OK, I have found a decent workaround to that: Just navigate to C:\Python27\Scripts\s3cmd and comment out lines 1837-1845. This way we can essentially skip a windows check and print progress on the cmd. However, since it works normally, I have no clue why the authors put it there in the first place. Cheers.
1
1
0
As the title suggests, I am using the s3cmd tool to upload/download files on Amazon. However I have to use Windows Server and bring in some sort of progress reporting. The problem is that on windows, s3cmd gives me the following error: ERROR: Option --progress is not yet supported on MS Windows platform. Assuming - -no-progress. Now, I need this --progress option. Are there any workarounds for that? Or maybe some other tool? Thanks.
s3cmd tool on Windows server with progress support
1.2
1
1
701
21,019,505
2014-01-09T11:53:00.000
7
0
1
0
python,python-sphinx,sphinx-apidoc
22,141,973
4
false
1
0
I do not use sphinx-build, but with make html I always do touch *.rst on my source files. Then make html can pick up the changes.
1
29
0
Currently, whenever I run sphinx-build, only when there are changes to the source files are the inline docstrings picked up and used. I've tried calling sphinx-build with the -a switch but this seems to have no effect. How can I force a full rebuild of the HTML output and force autodoc execution?
Sphinx: force rebuild of html, including autodoc
1
0
0
12,474
21,020,507
2014-01-09T12:38:00.000
2
0
0
0
python,flash,security,hash,digital-signature
21,033,015
1
true
1
0
Is there another way that I can verify a file's signature on the client side, without exposing the method used to create that signature? Public key crypto. You have only a public key at the client end, and require the private key on the server side to generate a signature for it to verify. What is the attack you're trying to prevent? If you are concerned about a man-in-the-middle attack between an innocent user and your server, the sensible choice would be TLS (HTTPS). This is a pre-cooked, known-good implementation including public key cryptography. It's far preferable to rolling your own crypto, which is very easy to get wrong.
1
1
0
I'm building a Flash application to run on the web, where users can visit and create their own content in conjunction with my service (built with Python). Specifically: the user sends in some data; some transformation is performed on the server; then the finished content is sent back to the user, where it is rendered by the client app. I want to be able to prevent the client from rendering bogus content, which I can do by passing a keyed hash along with the main content, generated by the server. The client would then use the same key to hash the content once again, and confirm that the hashes/signatures match. If there's a mismatch, it can be assumed that the content is inauthentic. The problem I have is that keeping the key inside the SWF is insecure. I've considered a number of ways to obfuscate the key, but am learning that if an attacker wants it, they can get it quite easily. Once an attacker has that, they can start creating their own content to be unknowingly accepted by the client. Is there another way that I can verify a file's signature on the client side, without exposing the method used to create that signature?
Verify file authenticity from Flash client without revealing key
1.2
0
0
68
21,025,774
2014-01-09T16:27:00.000
1
0
0
0
python,numpy
21,025,867
1
true
0
0
Depends on the use case. Both possibilities exist for a reason: if Z can be a matrix but just happens to have one column, make it a column vector. If Z is always a single vector, make it 1-d unless some operation (or library) requires the other format; 1-d is usually a bit easier to work with.
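A small illustration of the practical difference between the two shapes (note in particular how broadcasting treats them):

```python
import numpy as np

z1 = np.zeros(12)       # 1-d vector, shape (12,)
z2 = np.zeros((12, 1))  # 2-d column vector, shape (12, 1)

print(z1.shape)  # (12,)
print(z2.shape)  # (12, 1)

# Broadcasting treats them differently: adding a (12,) vector to a
# (12, 1) column produces a full (12, 12) matrix, a common source of bugs
# when the two conventions are mixed.
print((z1 + z2).shape)  # (12, 12)
```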
1
1
1
In numpy, is it recommended to create column arrays (2d arrays) rather than 1d arrays? For example, whereas Z = np.zeros((12,)) defines a 1-dimensional array, it might also be preferable to form Z = np.zeros((12,1)).
Python numpy: column arrays (2d) or lists (1d)
1.2
0
0
133
21,026,487
2014-01-09T16:57:00.000
5
0
1
0
python,text,colors,psychopy
21,063,050
2
true
0
0
No, that isn't possible right now. There's an experimental new stimulus class called TextBox that will allow it, but you'd have to write code to use that (not available yet from the Builder interface). Or just create some tif images of your stimuli and use those?
2
3
1
I am messing around in PsychoPy now, trying to modify the Sternberg demo for a class project. I want the stimulus text—the number set—to display in a variety of colors: say, one digit is red, the next blue, the next brown, etc. A variety within the same stimulus. I can only find how to change the color of the entire set. I was wondering if I could add another variable to the spreadsheet accompanying the experiment and have the values in the cells be comma separated (red,blue,brown…). Is this possible?
Text with multiple colors in PsychoPy
1.2
0
0
1,012
21,026,487
2014-01-09T16:57:00.000
2
0
1
0
python,text,colors,psychopy
23,425,589
2
false
0
0
The current way to implement this is to have a separate text stimulus for each digit, each with the desired colour. If the text representation of the number is contained in a variable called, say, stimulusText then in the Text field for the first text component put "$stimulusText[0]" so that it contains just the first digit. In the next text component, use "$stimulusText[1]", and so on. The colour of each text component can either be fixed or vary according to separate column variables specified in a conditions file.
2
3
1
I am messing around in PsychoPy now, trying to modify the Sternberg demo for a class project. I want the stimulus text—the number set—to display in a variety of colors: say, one digit is red, the next blue, the next brown, etc. A variety within the same stimulus. I can only find how to change the color of the entire set. I was wondering if I could add another variable to the spreadsheet accompanying the experiment and have the values in the cells be comma separated (red,blue,brown…). Is this possible?
Text with multiple colors in PsychoPy
0.197375
0
0
1,012
21,028,845
2014-01-09T18:59:00.000
0
0
0
1
python,git,bash
21,030,409
1
false
1
0
java code formatter as a pre-receive hook Don't do it. You're trying to run the equivalent of git filter-branch behind your developer's back. Don't do it. Is there any other way of doing this? If you want inbound code formatted in a particular way, validate the inbound files. If any aren't done right list them and reject the push. How to get that object on a remote server? You can't fetch arbitrary objects, you can only fetch by ref (branch or tag) name. The pre-receive hook runs before any refs have been updated, so no ref names the inbound commits.
1
0
0
I have an Atlassian Stash server for git. I am looking to write a script that will run a Java code formatter as a pre-receive hook (before the changes are accepted into the repository). What I want is NOT to do the work on the Stash server itself, but rather to perform the work on another server and send the status back (0 or 1) to the Stash server. I have written a script in Python that calls a CGI (Python) script on the remote server with "ref oldrev newrev" as an HTTP GET request. Once I have the STDIN values (ref oldrev newrev) on the remote server, I created a dir, ran git init, git remote add origin URL, and git fetch (I even tried git pull) to get the latest contents/objects of the repository, hoping to get the object that has not yet been pushed to the repository but is in a pre-push staging state. The hash or SHA key or "newrev" key of the object that is in the pre-push stage: 36ac63fe7b15049c132c310e1ee153e044b236b7. Now, when I run 'git ls-tree 36ac63fe7b15049c132c310e1ee153e044b236b7 Test.java' inside the directory I created above, it gives me the error 'fatal: not a tree object'. Now, my questions are: How do I get that object on the remote server? What git command can I run that will give me that object at that stage? Is there any other way of doing this? Does what I've asked above make sense? Let me know if I am not clear and I will try to clear things up. Thanks very much in advance for any/all help!
Git pre-pushed object on remote server? git ls-tree
0
0
0
146
21,030,519
2014-01-09T20:32:00.000
0
0
1
0
python,pip,swampy
26,269,979
2
false
0
0
Download the Swampy package tar.gz and extract it: $ tar -zxvf [swampy-package-name]. cd to the Swampy folder and, on the command line, type: $ sudo python setup.py install. Then run Python ($ python) and do from swampy.TurtleWorld import *, or run the polygon.py example from Think Python: $ python polygon.py. It works with this Python; some other Python versions don't work.
1
1
0
So I'm following the 'Think Python' PDF guide and this is my first real hurdle. I tried to follow a guide to install it but it's completely over my head. I know this is vague but if anyone could guide me through it like I'm a pensioner I'd be grateful. I'm currently using Python 2.7.6. I think I downloaded setup tools and PIP but I can't be too sure. Sorry for the openness and vagueness of this question but I'm quite stuck.
Trying to install Swampy for Python.
0
0
0
845
21,033,198
2014-01-09T23:15:00.000
0
0
0
1
python,macos,postgresql,psycopg2
21,414,139
1
false
0
0
I had the same problem when I tried to install psycopg2 via Pycharm and using Postgres93.app. The installer (when running in Pycharm) insisted it could not find the pg_config file despite the fact that pg_config is on my path and I could run pg_config and psql successfully in Terminal. For me the solution was to install a clean version of python with homebrew. Navigate to the homebrew installation of Python and run pip in the terminal (rather than with Pycharm). It seems pip running in Pycharm did not see the postgres installation on my PATH, but running pip directly in a terminal resolved the problem.
1
2
0
I am trying to install psycopg2 on Mac OS X Mavericks but it doesn't see any pg_config file. Postgres was installed via Postgres.app . I found pg_config in /Applications/Postgres.app/Contents/MacOS/bin/ and put it to setup.cfg but still can't install psycopg2. What might be wrong?
Can't install psycopg2 on Maverick
0
1
0
916
21,033,857
2014-01-10T00:10:00.000
1
0
1
0
python,list,python-2.7,dictionary
21,034,096
3
false
0
0
A couple of ideas: Set up a collections.defaultdict for your output. This is a dictionary with a default value for keys that don't yet exist (in this case, as aelfric5578 suggests, an empty list); Build a list of all the words in your file, in order; and You can use zip(lst, lst[1:]) to create pairs of consecutive list elements.
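Putting those three ideas together (with a hypothetical hard-coded string standing in for the file contents):

```python
from collections import defaultdict

# Hypothetical input; in the real program this would come from
# open("filename").read().
text = "the quick fox and the lazy dog and the fox"

# 1. A defaultdict whose missing keys start as empty lists.
followers = defaultdict(list)

# 2. All the words of the file, in order.
words = text.split()

# 3. zip the list against itself shifted by one to get consecutive pairs,
#    and record each follower (duplicates included, as required).
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

print(dict(followers))
```

Note that "and" maps to ["the", "the"], showing that duplicates are kept.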
1
0
0
I am having difficulties writing a Python program that reads a text file and builds a dictionary mapping each word that appears in the file to a list of all the words that immediately follow it in the file. The list of words can be in any order and should include duplicates. For example, the key "and" might have the list ["then", "best", "after", ...] listing all the words which came after "and" in the text. Any ideas would be a great help.
how to write a Python program that reads from a text file, and builds a dictionary which maps each word
0.066568
0
0
363
21,034,204
2014-01-10T00:42:00.000
12
0
1
0
python,date,datetime
21,034,313
2
true
0
0
They represent different things. A datetime is a specific point in time. A date is an interval in time. It's not 00:00:00 on that day, it's the entire day. That's why you can't directly convert between them. And that's why you cannot use datetime as a substitute for date. (The fact that the same timedelta type is used for both is probably a flaw in the module, which has been discussed a few times, but I doubt that it will ever be corrected.)
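A short demonstration that the two types really don't interchange, which is exactly the class of headache the question describes:

```python
from datetime import date, datetime

d = date(2014, 1, 10)
dt = datetime(2014, 1, 10)  # midnight on that day

# Equality across the types is always unequal, even at midnight:
print(d == dt)          # False

# An explicit conversion is required to compare them meaningfully:
print(dt.date() == d)   # True

# Ordering across the types raises rather than guessing:
try:
    d < dt
except TypeError as exc:
    print("ordering failed:", exc)
```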
1
7
0
In Python, it's my impression that datetime.date is basically just a subset of datetime.datetime, with fewer features and slightly less overhead. I'd like to never use datetime.date again for the following reasons: No more conversions between the two types! datetime.date objects are always timezone-unaware, for obvious reasons. This makes it harder to generalize timezone handling across an entire application, if sometimes you're using datetime.date and sometimes you're using datetime.datetime On multiple occasions I've encountered headaches accidentally comparing datetime.date objects and datetime.datetime objects. Always using only one type makes comparisons simpler. Formatting differences between datetime.date and datetime.datetime should be a formatting issue only. Carrying the difference further down into the underlying class introduces an unnecessary complexity to the language. If I'm on the right track about this, then datetime.date will eventually be deprecated in future releases of Python, just like unicode and long have been. Adopting this convention now puts me ahead of the curve. It seems to me that the only compelling argument for continuing to use datetime.date is small amount of extra memory and storage involved in datetime.datetime. Let's say that I don't care about this extra overhead. Are there any other very compelling reasons to keep using this class? I don't want to switch to this convention and then regret it later because I missed something important.
Can I just stop using datetime.date entirely?
1.2
0
0
575
21,037,984
2014-01-10T06:55:00.000
0
0
0
0
python,file,date,comparison,datemodified
21,039,099
2
false
0
0
You could have come across a problem with different time stamps on different file system types. Since you post so little information on these, I have to take a wild guess. The mechanism I'm thinking about is this: Your original file system of type A (e. g. ext3fs, reiserfs, ntfs, ...) might contain time stamps for each file which have a precision of milliseconds. The backup file system (e. g. fat32, ...) might have a different precision for the time stamps (e. g. only seconds). During creation of the backup the system will have to decide how to handle that. The millisecond information must be lost, and maybe the value gets rounded, so a 12:23:34.789 might be rounded to a 12:23:35. (This of course should apply to around 50% of the files.) When comparing file times, depending on the cleverness of the routines, this result might be interpreted as "the backup is newer than the original". As I said, this is just a wild guess, so you should have a look at the concrete time stamps to find out.
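If that guess is right, one fix is a tolerance-based comparison rather than a strict "destination newer" test. Below is a minimal sketch, assuming the pen drive is FAT32 (2-second timestamp resolution); the helper name and the sample mtimes are made up for illustration:

```python
FAT_RESOLUTION = 2.0  # seconds; FAT32 stores mtimes at 2-second granularity

def is_destination_newer(src_mtime, dst_mtime, tolerance=FAT_RESOLUTION):
    """Treat the destination as newer only if it is newer by more
    than the file system's rounding window."""
    return dst_mtime - src_mtime > tolerance

# A source mtime of ...0.789 rounded up to ...2.0 on the pen drive
# looks 1.211 s "newer", but falls inside the tolerance window:
print(is_destination_newer(1000.789, 1002.0))  # within tolerance: not newer
print(is_destination_newer(1000.789, 1005.0))  # genuinely newer
```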
1
2
0
I use a Python backup script for my files and I back up from my hard drive to both pen drives that are detached from my PC and permanently attached external drives. I have logic in my script that does a copy from source to destination only if the source file is newer. If the destination file is newer, I just report an error and don't do any copy. This works well for the permanently attached external drives. But for the the pen drives, for most of the files, the destination file is reported as being newer than the source file. I use my pen drives for backups only and never for anything else. So it is impossible for files on the pen drives to be newer. What could be the problem? Thank you, Vishy
Python - File modification time comparison, strange behavior
0
0
0
196
21,041,787
2014-01-10T10:24:00.000
2
0
1
0
python,urlencode
21,041,834
2
false
0
0
urllib.unquote will do the trick for decoding, and urllib.quote for encoding (in Python 3 both live in urllib.parse).
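A quick sketch of both directions; the function location is the main thing that differs between Python 2.7 and 3.3:

```python
# Python 3 (3.3+): quote/unquote live in urllib.parse.
from urllib.parse import quote, unquote

original = "a string with spaces & symbols/"
encoded = quote(original)      # '/' is in the default "safe" set
print(encoded)                 # a%20string%20with%20spaces%20%26%20symbols/
print(unquote(encoded))        # round-trips back to the original

# Python 2.7 equivalent, for reference:
#   from urllib import quote, unquote
```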
1
8
0
Is there a way to urlencode/urldecode a string in Python? I want to emphasize that I want a single string, not a dict. I know of the function that does that to a dict. I could easily construct out of it a function that does that for a string, but I'd rather not have to construct my own function. Does Python have this functionality built-in? (Answers for both Python 2.7 and 3.3 would be appreciated.)
URL-encoding and -decoding a string in Python
0.197375
0
1
4,761
21,045,893
2014-01-10T13:42:00.000
0
0
1
0
java,python,jython,jython-2.5
21,045,962
1
true
0
0
PEP8 is just a code style specification, albeit a very good one. You can use it on any Python implementation. It is widely used, but that doesn't mean your code won't work without it!
1
0
0
While browsing the web, I decided to pay the Jython project a visit and read this news: "JyNI is a compatibility layer with the goal to enable Jython to use native CPython extensions like NumPy or SciPy. This way we aim to enable scientific Python code to run on Jython. ... Our philosophy is to integrate JyNI with Jython and CPython extensions as seamless as possible. So JyNI aims to work without any recompilation of Jython or the desired CPython extensions." What this means is that we can use NumPy or SciPy in Jython. I have worked with Python before, but I lean more towards Java. Do Python's PEP 8 rules apply in Jython?
Does jython have pep8 rules like python?
1.2
0
0
793
21,046,136
2014-01-10T13:55:00.000
0
0
0
0
python,web2py
21,050,586
1
false
1
0
fake_migrate_all doesn't do any actual migration (hence the "fake") -- it just makes sure the metadata in the .table files matches the current set of table definitions (and therefore the actual database, assuming the table definitions in fact match the database). If you want to do an actual migration of the database, then you need to make sure you do not have migrate_enabled=False in the call to DAL(), nor migrate=False in the relevant db.define_table() calls. Unless you explicitly set those to false, migrations are enabled by default. Always a good idea to back up your database before doing a migration.
1
0
0
Recently I have been working with web2py and PostgreSQL. I made a few changes to my table, adding new fields with fake_migrate_all = True. It did update my .table file, but the two newly added fields were not altered in the PostgreSQL database table. I also tried fake_migrate_all = False and deleted my .table file, but that still didn't help alter the table. Is there a better solution so that I don't have to drop my table? The fields should be altered/added in the table without my data being lost.
Web2py postgreSQL database
0
1
0
535
21,046,422
2014-01-10T14:10:00.000
5
0
0
0
python,c++,bootstrapping,quantlib
21,050,187
1
false
0
0
PiecewiseYieldCurve is a class template, so it can't be exported to Python as such. By default, we're exporting to Python a particular instantiation of it; it's exported as PiecewiseFlatForward and it correspond to PiecewiseYieldCurve<ForwardRate,BackwardFlat>. If you need another instantiation, you can edit QuantLib-SWIG/SWIG/piecewiseyieldcurve.i, add it (it you look at the end of the file, you'll find a few examples of how to do it) and regenerate and recompile the Python wrappers. Finally, an example of bootstrap is available in QuantLib-SWIG/Python/examples/swap.py.
1
3
1
I want to bootstrap a yield curve in Python using the QuantLib library. I know that when bootstrapping in C++ there is a class for this called PiecewiseYieldCurve in QuantLib, but when I am using Python there is no such function in QuantLib's Python bindings. I wonder if there is an alias of PiecewiseYieldCurve in the Python bindings, so that I can call the alias name in order to use the PiecewiseYieldCurve functionality. Or should I create my own function to bootstrap the yield curve? Thanks!
Bootstrapping using Quantlib Python
0.761594
0
0
1,387
21,047,039
2014-01-10T14:37:00.000
2
0
0
0
python,wxpython
21,067,422
2
false
0
1
Take a look at wx.lib.resizewidget and the ResizeWidget sample in the demo.
1
0
0
The question is pretty much in the title, not much to add here. I'm trying to allow the user to dynamically size multi-line text controls just as they can re-size the panel (I set the default, but they are free to change it). Can this be done?
Is there a way to allow the user to re size a text control in wxPython?
0.197375
0
0
119
21,048,174
2014-01-10T15:30:00.000
2
0
1
0
ipython-notebook
27,841,611
7
false
0
0
I did that with jQuery. You need to "print preview" your notebook, then run the following from the browser console: jQuery(".input").hide()
1
51
0
Now ipython notebook could easily hide the output part of a cell by double clicking the left margin of the output. But I havn't figure out a way to hide the whole cell content.
Is there a way to convient fold/unfold an ipython cell?
0.057081
0
0
25,882
21,050,082
2014-01-10T16:58:00.000
4
0
1
0
python,list
21,050,114
3
false
0
0
They did implement push, but they split the functionality into list.insert() and list.append() instead. list.append() is the equivalent of pushing a value onto the end. list.insert() is the inverse of list.pop() with an index; inserting the given value at the given index. The list.pop() method is an alternative to del listobject[index] in that it returns the value at the index you are removing. Note that Python lists are not limited to being used as a stack or queue, and list.pop() is a later addition to the list type to make removing-and-returning more efficient.
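A quick illustration of the three methods side by side:

```python
stack = []
stack.append(1)   # the "push" equivalent: add to the end
stack.append(2)
stack.append(3)

top = stack.pop()        # removes AND returns the last item
print(top, stack)        # 3 [1, 2]

stack.insert(0, 0)       # insert a value at an arbitrary index
print(stack)             # [0, 1, 2]

first = stack.pop(0)     # pop also accepts an index
print(first, stack)      # 0 [1, 2]
```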
2
0
0
I understand how it works but I don't see the purpose of it in an actual program. And if they did implement the pop() method why would they not implement a push() too?
What is the point of using the pop() on list?
0.26052
0
0
1,367
21,050,082
2014-01-10T16:58:00.000
4
0
1
0
python,list
21,050,103
3
false
0
0
What about append()? That's the equivalent of push. The whole purpose is a quick way to use a list as a stack when convenient. It can also be used as a queue with the combination of pop(0) and append(), although for that specific case the best choice is deque from collections.
2
0
0
I understand how it works but I don't see the purpose of it in an actual program. And if they did implement the pop() method why would they not implement a push() too?
What is the point of using the pop() on list?
0.26052
0
0
1,367
21,051,059
2014-01-10T17:50:00.000
0
0
1
0
python,algorithm,if-statement
21,051,197
5
false
0
0
You sort your "compare list", then traverse it to extract runs of consecutive integers as separate intervals. For intervals of length 1 (i.e. single numbers) you perform an == test, and for larger intervals you can perform chained <=/>= comparisons.
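A minimal sketch of that interval-extraction step (the function name is illustrative):

```python
def to_intervals(values):
    """Collapse a collection of ints into sorted (low, high) runs
    of consecutive integers."""
    intervals = []
    for n in sorted(set(values)):
        if intervals and n == intervals[-1][1] + 1:
            # Extend the current run by one.
            intervals[-1] = (intervals[-1][0], n)
        else:
            # Start a new run.
            intervals.append((n, n))
    return intervals

print(to_intervals([5, 1, 2, 3, 9, 10, 42]))
# [(1, 3), (5, 5), (9, 10), (42, 42)]
```

Each (lo, hi) pair with lo == hi becomes an `a == lo` test; the rest become `lo <= a <= hi`.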
2
0
0
Let's say we have an if statement in python of form: if a == 1 or a == 2 or a == 3 ... a == 100000 for a large number of comparisons, all connected with an or. What would a good algorithm be to compress that into a smaller if statement? e.g. for the above: if a >= 1 and a <= 100000. Sometimes there will be a pattern in the numbers and sometimes they will be completely random, so the algorithm must deal well with both cases. Can anyone suggest a decent algorithm that will efficiently condense an if statement of this form? Edit: The goal is to have the resulting if statement be as short as possible. The efficiency of evaluating the if statement is secondary to length.
Condensing a long if statement to a short one automatically
0
0
0
167
21,051,059
2014-01-10T17:50:00.000
0
0
1
0
python,algorithm,if-statement
21,051,276
5
false
0
0
Maintain a sorted array of the numbers to compare and perform a binary search on it whenever you want to check a. If a exists in the array then the statement is true, else false. It will be O(log n) for each if query.
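A sketch of that approach using the stdlib bisect module (the helper name is made up):

```python
from bisect import bisect_left

def contains(sorted_values, a):
    """O(log n) membership test on an already-sorted list."""
    i = bisect_left(sorted_values, a)
    return i < len(sorted_values) and sorted_values[i] == a

values = sorted([7, 3, 100000, 42])
print(contains(values, 42))   # present
print(contains(values, 41))   # absent
```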
2
0
0
Let's say we have an if statement in python of form: if a == 1 or a == 2 or a == 3 ... a == 100000 for a large number of comparisons, all connected with an or. What would a good algorithm be to compress that into a smaller if statement? e.g. for the above: if a >= 1 and a <= 100000. Sometimes there will be a pattern in the numbers and sometimes they will be completely random, so the algorithm must deal well with both cases. Can anyone suggest a decent algorithm that will efficiently condense an if statement of this form? Edit: The goal is to have the resulting if statement be as short as possible. The efficiency of evaluating the if statement is secondary to length.
Condensing a long if statement to a short one automatically
0
0
0
167
21,051,731
2014-01-10T18:29:00.000
0
0
0
0
python,django,deployment,amazon-s3
21,052,730
1
false
1
0
It's probably impossible to give an accurate assessment with the limited info on your setup. If your CSS files are working, what folder are they sitting in on your server? Why not have the images folder in the same directory and set that directory as your MEDIA_URL in your settings.py file? In your browser, check an image's full path and compare it to your CSS files: where are they pointing, and do you have a directory on your server where they are supposed to be? Are you receiving an "access denied" if you put an image URL directly into your browser?
1
0
0
Most of my static files on my newly deployed Django website are working (CSS), but not the images. All the images are broken links for some reason and I cannot figure out why. I am serving my static files via Amazon AWS S3. I believe all my settings are configured correctly as the collectstatic command works (and the css styling sheets are up on the web). What could be the problem?
Deploying Django - Most static files are appearing except images
0
0
0
106