Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
32,565,759 | 2015-09-14T13:12:00.000 | 0 | 0 | 1 | 1 | android,python,android-ndk | 52,265,683 | 1 | false | 0 | 0 | The OP probably doesn't need this any more, but I had the exact same problem, trying to set up a Makefile to build a project, so maybe this will be helpful to someone else in the future as well.
ndk-build is a wrapper around GNU make that invokes a bunch of Makefiles in the build/core directory of the NDK, so while it's not universally applicable*, for your personal project you can modify those Makefiles to do whatever you want. I found a clean-installed-binaries target that a couple of build/install targets depended on; removing those dependencies fixed the issue with perpetual installs.
In cases where that clean target is necessary, you can invoke it manually with:
ndk-build clean-installed-binaries.
*Given the time to come up with a clean opt-in solution you can submit a patch to ndk project, and if accepted it will eventually become universally applicable. | 1 | 3 | 0 | I'm using the Native Development Kit (NDK) in a project of mine, and I'm trying to automate the whole app build procedure with Python.
Whenever ndk-build is called, it copies the prebuilt shared libraries to libs/<abi>/, even if there are no changes in them or they already exist there. This causes a problem when I call ant later on, as it detects changed files (the library timestamps are newer) and so rebuilds the APK unnecessarily.
Is there a way to change the ndk-build behaviour so that it checks the existing libraries in the libs/<abi>/ folder, calls ndk-build only if some need updating or are missing, and otherwise just proceeds to the next build step?
I've tried using filecmp in Python, but as the timestamps are different between the prebuilt shared libraries and the installed ones, it doesn't work. | ndk-build installs libraries even if no change. Can this be changed? | 0 | 0 | 0 | 100 |
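A minimal sketch of the content-based check the asker was after; the prebuilt directory name is a hypothetical placeholder. Passing shallow=False to filecmp.cmp compares file contents byte-by-byte instead of stat information, so the differing timestamps no longer matter:

```python
import filecmp
import os
import subprocess

PREBUILT_DIR = "jni/prebuilt/armeabi-v7a"   # hypothetical location of the prebuilt .so files
INSTALLED_DIR = "libs/armeabi-v7a"

def libs_up_to_date():
    """Return True if every prebuilt library already exists in libs/<abi>/ with identical content."""
    for name in os.listdir(PREBUILT_DIR):
        prebuilt = os.path.join(PREBUILT_DIR, name)
        installed = os.path.join(INSTALLED_DIR, name)
        # shallow=False forces a byte-by-byte comparison, ignoring timestamps
        if not (os.path.isfile(installed) and filecmp.cmp(prebuilt, installed, shallow=False)):
            return False
    return True

if not libs_up_to_date():
    subprocess.check_call(["ndk-build"])
# otherwise skip straight to the next build step (ant)
```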
32,570,491 | 2015-09-14T17:28:00.000 | 5 | 0 | 0 | 0 | python,google-sheets,gspread | 56,837,998 | 1 | true | 0 | 1 | The question is a bit old, but today I had to check this for myself, so this may be useful for people that arrive here.
Insert an image: I'm not sure if inserting images as objects (as in the Google Sheets UI) is possible, but one way of doing it is setting the content of the cell to =IMAGE("<url of image>"). I believe the image here needs to be a public link, so a direct link to a protected Google Drive resource wouldn't work (even if the user has permission to access it).
Formatting cells: Now there is package gspread-formatting to do this, but it could be also done calling the v4 of spreadsheets API. | 1 | 9 | 0 | Is it possible with gspread or other Python-based access to Google Spreadsheets to insert an image into a spreadsheet?
Also, is it possible to make rich text cells (e.g., bold, italic, different fontsize, colors, etc.)? | Does the Google Spreadsheet Python API or gspread allow images or rich text? | 1.2 | 0 | 0 | 1,698 |
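A small sketch of the =IMAGE() approach described in the answer, using gspread with service-account auth; the spreadsheet name, cell and image URL are placeholders, and whether the formula is evaluated or stored as literal text depends on how your gspread version sends the value.

```python
import gspread

# Service-account authentication; the JSON key file name is a placeholder.
gc = gspread.service_account(filename="service_account.json")
ws = gc.open("My spreadsheet").sheet1

# The cell content becomes a Sheets IMAGE() formula pointing at a publicly reachable URL.
# If your gspread version stores this as literal text, re-send it as a "user entered"
# value so Google Sheets evaluates the formula.
ws.update_acell("B2", '=IMAGE("https://example.com/logo.png")')
```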
32,571,328 | 2015-09-14T18:17:00.000 | -1 | 1 | 0 | 0 | python,batch-processing,maya | 32,574,618 | 1 | false | 0 | 0 | /your/maya_install_path/plug-ins/xgen/scripts/xgenm/ui/xgDescriptionEditor.py take a look at this file, they have a function called exportPatches line num around 1800+. You can see there they exporting alambic and xgmPatchInfo. Basically you can mimic same function in your script. | 1 | 0 | 0 | Trying to find the python code for XGen: Export patches for Batch render options in Maya. I couldn't find anything via Maya's Script Editor activity (also tried enabling echo all commands) but nothing shows up when I hit the button under Xgen window>File>Export Patches for Batch Render.
Thanks!! | Python code for XGen: Export Patches for Batch Render | -0.197375 | 0 | 0 | 1,232 |
32,572,114 | 2015-09-14T19:07:00.000 | 0 | 0 | 0 | 0 | java,c#,python,c++,swig | 32,615,679 | 1 | false | 0 | 1 | As you are using c++, than you can try with std::wstring as it has typemaps for all: C#, Python and Java. It is in std_wstring.i | 1 | 2 | 0 | I use %include wchar.i in C# and it seems to work correctly for all wchar_t values and arrays mapping to C#'s string. Swig's library for Python also contains the typemaps for wchar_t in wchar.i file.
Java's library doesn't have wchar.i. What's the reason for that? And also, how can I achieve type mapping from wchar_t types in C++ to String in Java? | SWIG: wchar_t support for Java | 0 | 0 | 0 | 426
32,573,259 | 2015-09-14T20:21:00.000 | 1 | 0 | 1 | 0 | pycharm,ipython-notebook | 44,016,080 | 4 | false | 0 | 0 | if you use netbeans type F6
Or Run>Run All Cells | 2 | 17 | 0 | It might be a dull question, but I really did not find the answer for it.
I'm trying to migrate an IPython notebook from the browser edition to PyCharm. The thing is, I can't find a "Run All" equivalent of the browser version. Do I have to run cell by cell every time?
thanks ! | How to "Run ALL" in IPython Notebook in PyCharm? | 0.049958 | 0 | 0 | 2,432 |
32,573,259 | 2015-09-14T20:21:00.000 | 4 | 0 | 1 | 0 | pycharm,ipython-notebook | 42,926,906 | 4 | false | 0 | 0 | Wondered the same (ages later). Just found out that in my PyCharm CE 2016.3 I can do shift + F10 to run all cells. | 2 | 17 | 0 | It might be a dull question, but I really did not find the answer for it.
I'm trying to migrate an IPython notebook from the browser edition to PyCharm. The thing is, I can't find a "Run All" equivalent of the browser version. Do I have to run cell by cell every time?
thanks ! | How to "Run ALL" in IPython Notebook in PyCharm? | 0.197375 | 0 | 0 | 2,432 |
32,573,995 | 2015-09-14T21:09:00.000 | 1 | 0 | 0 | 1 | python,apache-spark,ipython,ibm-cloud,jupyter | 32,574,697 | 1 | true | 0 | 0 | You cannot add 3rd party libraries at this point in the beta. This will most certainly be coming later in the beta as it's a popular requirement ;-) | 1 | 2 | 1 | I'm currently playing around with the Apache Spark Service in IBM Bluemix. There is a quick start composite application (Boilerplate) consisting of the Spark Service itself, an OpenStack Swift service for the data and an IPython/Jupyter Notebook.
I want to add some 3rd party libraries to the system and I'm wondering how this could be achieved. Using a Python import statement doesn't really help since the libraries are then expected to be located on the SparkWorker nodes.
Is there a way of loading Python libraries in Spark from an external source during job runtime (e.g. a Swift or FTP source)?
thanks a lot! | How can I reference libraries for ApacheSpark using IPython Notebook only? | 1.2 | 0 | 0 | 157 |
32,576,326 | 2015-09-15T01:35:00.000 | 2 | 0 | 0 | 0 | python,postgresql,python-3.x,centos,centos7 | 32,576,369 | 1 | false | 0 | 0 | You have misspelled the name of the library. The correct name is psycopg2 | 1 | 0 | 0 | Has anyone installed psycopg2 for python 3 on Centos 7? I'm sure it's possible, but when I run:
pip install psycopg2
I get:
Could not find a version that satisfies the requirement pyscopg2 (from versions: )
No matching distribution found for pyscopg2 | psycopg2 for python3 on Centos 7 | 0.379949 | 1 | 0 | 1,254 |
32,578,106 | 2015-09-15T05:06:00.000 | 24 | 0 | 0 | 1 | macos,python-2.7 | 32,578,175 | 1 | false | 0 | 0 | If you install Python using brew, the relevant headers are already installed for you.
In other words, you don't need python-devel. | 1 | 27 | 0 | brew and port do not provide python-devel.
How can I install it in Mac OS?
Is there an equivalent in Mac OS? | how to install python-devel in Mac OS? | 1 | 0 | 0 | 59,901 |
32,580,892 | 2015-09-15T08:08:00.000 | 0 | 0 | 1 | 0 | message-queue,apache-kafka,messagebroker,kafka-python | 32,581,181 | 2 | false | 0 | 0 | I would recommend to separate (partition) your data into multiple partitions within the same topic.
I assume the data logically belongs together (for example a stream of click events).
The advantage of partitioning your data using multiple partitions within the same topic is mainly that all Kafka APIs are implemented to be used like this.
Splitting your data into the topics would probably lead to much more code in the producer and consumer implementations. | 2 | 0 | 0 | In order to seperate my data, based on a key: Should I use multiple topics or multiple partitions within same topic? I'm asking on basis of overheads, computation, data storage and load caused on server. | Parallelism at Kafka Topics or Partitions Level | 0 | 0 | 0 | 2,226 |
32,580,892 | 2015-09-15T08:08:00.000 | 0 | 0 | 1 | 0 | message-queue,apache-kafka,messagebroker,kafka-python | 32,601,162 | 2 | false | 0 | 0 | As suggested by @rmetzger splitting records into multiple topic would increase the complexity at the producer level however there might be some other factors worth considering.
In Kafka the main unit of parallelism is the number of partitions in a topic, because with N partitions you can spawn up to N consumer instances to keep reading data from the same topic in parallel.
E.g. if you have a separate topic per event type with N partitions each, then while consuming you will be able to create N consumer instances, each dedicated to consuming from a specific partition concurrently. But in that case the ordering of the messages is not guaranteed, i.e. ordering of the messages is lost in the presence of parallel consumption.
On the other hand keeping the records within same topic in a separate partition will make this a lot easier to implement and consumer messages in order (Kafka only provides a total order over messages within a partition, not between different partitions in a topic.). But you will be limited to run only one consumer process in that case. | 2 | 0 | 0 | In order to seperate my data, based on a key: Should I use multiple topics or multiple partitions within same topic? I'm asking on basis of overheads, computation, data storage and load caused on server. | Parallelism at Kafka Topics or Partitions Level | 0 | 0 | 0 | 2,226 |
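A short kafka-python sketch of the single-topic, multi-partition layout both answers lean towards: records that share a key are hashed to the same partition, so per-key ordering is kept while consumers scale with the partition count. Topic name, broker address and payloads are placeholders.

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# All events for the same key end up in the same partition of the one topic,
# so ordering per key is preserved while other keys are processed in parallel.
producer.send("click-events", key=b"user-42", value=b'{"event": "click"}')
producer.send("click-events", key=b"user-7", value=b'{"event": "view"}')
producer.flush()
```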
32,585,703 | 2015-09-15T12:08:00.000 | 0 | 0 | 0 | 0 | widget,ipython-notebook,jupyter | 33,165,242 | 1 | true | 0 | 1 | Apparently, that was due to a bug in the version of Jupyter I was using. The style for .widget-subarea in widget.css has gained a flex: 2 1 0%; property since, which was missing before, despite display: flex;. | 1 | 0 | 0 | During IPython 3 times, I could make my custom IPython.html.widgets.DOMWidget span the full width of the notebook by setting it's width trait (corresponding to the CSS width of the widget div) to 100 %. In Jupyter 4, that does not work anymore, the widget gets squashed to the left edge of the notebook. Web inspector tells me that the parent div.widget-subarea essentially has zero width up to padding. Is that a new feature or a bug? What is the exact widget API to achieve a full-width widget? I guess I'm bound for trouble if my widget's JS just sets the width of the parent wrapper to 100 % as well... | IPython/Jupyter notebook widget span full width | 1.2 | 0 | 0 | 686 |
32,590,141 | 2015-09-15T15:35:00.000 | 1 | 1 | 0 | 0 | python,unit-testing,unicode,python-unittest,pybuilder | 32,715,377 | 2 | false | 0 | 0 | I seemed to have got this working by doing the following:
First I got the same error as you:
BUILD FAILED - 'unicode' object has no attribute 'write'
Then I uninstalled xmlrunner & unittest-xml-reporting using pip
Then I used pyb install_dependencies which reinstalls unittest-xml-reporting
Then my unit tests start running again when I use pyb:
There were 1 error(s) and 0 failure(s) in unit tests
This is my current pip list output:
pip (7.1.2)
PyBuilder (0.11.1)
setuptools (18.2)
six (1.9.0)
tblib (1.1.0)
unittest-xml-reporting (1.12.0)
wheel (0.24.0)
If you are using virtualenv, you can also get this error when you have pybuilder installed outside of your virtualenv environment:
For example, your virtualenv does not have pybuilder installed, but you can still run pyb from command line. It is this pybuilder that needs to be removed as well (I am on OSX so it was the default python that came with it) | 1 | 2 | 0 | I have a project being built with Pybuilder. I cloned it onto a new computer, and when I ran pyb, my unit tests complained that there was no module named xmlrunner. So after I did pip install xmlrunner, I get a build error from Pybuilder that:
'unicode' object has no attribute 'write'.
If I remove my unit tests from the unittest search path, the build completes successfully. When I run the unit tests directly, they complete successfully. So I'm thinking that somehow XMLRunner is failing. Pip installed XMLRunner version 1.7.7. Thanks in advance for your help. | XMLRunner - "unicode object has no attribute 'write'" when building | 0.099668 | 0 | 0 | 839 |
32,590,327 | 2015-09-15T15:44:00.000 | 0 | 0 | 0 | 0 | python,url,networking,request | 32,590,563 | 2 | false | 0 | 0 | It is not difficult to parse the webpage and find the links of all "attached" files such as (css, icon, js, images, etc.) which will be fetched by the browser that you can see them in the 'Network' panel.
The harder part is that some files are fetched by javascript using ajax. The only way to do that (completely and correctly) is to simulate a browser (parse html+css and run javascripts) which I don't think python can do. | 1 | 0 | 0 | So I have a question; How does one get the files from a webpage and the urls attached to them. For example, Google.com
so we go to google.com and open firebug (Mozilla/chrome) and go to the "network"
We then see the location of every file attached, and extension of the file.
How do I do this in python?
For url stuff, I usually look into urllib/mechanize/selenium but none of these seem to support what I want or I don't know the code that would be associated with it.
I'm using linux python 2.7 - Any help/answers would be awesome. Thank you for anyone attempting to answer this.
Edit: I don't know how the back-end servers generate these things, but Firebug shows this information in the "Net"/"Network" section. I wondered if it could be implemented in Python somehow. | Get files attached to URL using python | 0 | 0 | 1 | 76
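A minimal sketch of the "parse the page and collect the attached file URLs" idea from the answer, written against Python 3's standard library (on the asker's Python 2.7 the equivalent modules are urllib2, HTMLParser and urlparse); as the answer notes, resources fetched later by JavaScript will not appear.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class AssetCollector(HTMLParser):
    """Collects every src/href URL found in the page's tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.assets = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                self.assets.append(urljoin(self.base_url, value))

url = "http://example.com/"                      # placeholder page
html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
collector = AssetCollector(url)
collector.feed(html)
for asset in collector.assets:
    print(asset)    # attached file URLs; extensions via os.path.splitext if needed
```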
32,592,827 | 2015-09-15T18:09:00.000 | 0 | 0 | 1 | 0 | python,cmd,pyinstaller,linear-programming,glpk | 59,558,638 | 5 | false | 0 | 0 | If your script accesses non-Python files, move the executable so it can find them.
I was having the same problem. For me, the cause was that some of my Python files required access to non-Python files (in my case, gifs). PyInstaller didn't bundle up the resources along with the Python files and didn't stick the executable in the same directory as the main Python file, so I was getting an error when my program tried to access them. The solution was simply to copy the resources to the location where the executable was looking for them, or vise-versa.
For some reason, the error messages that helped me find the problem didn't generate when I ran PyInstaller normally. They were only generated when using the --onefile flag, and even then, they only stuck around for less than half of a second before the prompt closed. I had to use ctrl+prt to capture my screen when the messages appeared. | 3 | 8 | 0 | After 3 days, I can't get a python program packaged into a .exe file. I've tried py2exe (which continuously missed modules), and PyInstaller.
Here's the complicated part. My program uses a lot of additional installed modules (coopr, pyomo, openpyxl, glpk, cbc, pyutilib, numpy, etc.). These in turn import all kinds of other things, and I can't track it down (the PyInstaller warning log lists 676 lines of missing or potentially unneeded modules.)
However, I've gotten (by adding imports of "missing" modules to my program) a .exe version which runs from double clicking or from the command line, without printing any error.
The problem is, the program does nothing. I have an input file which is included in the build, which my program reads in, does some (intense) calculations, and then creates a .csv output file in the same directory. It works as a .py file. My .exe does nothing.
So, if you can tell me what's wrong go ahead. If not, I'd like to know any helpful steps or ideas to try. At this point, I've exhausted the feedback I can find from the program and documentation. | PyInstaller .exe file does nothing | 0 | 0 | 0 | 14,649 |
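Related to the first answer above (the exe not finding the bundled input file): a common pattern is to resolve data files relative to PyInstaller's unpack directory. This is a hedged sketch; sys._MEIPASS only exists inside a one-file build, and the file name is a placeholder.

```python
import os
import sys

def resource_path(relative_name):
    # Inside a PyInstaller one-file build the bundled data is unpacked to sys._MEIPASS;
    # when running the plain .py file, fall back to the script's own directory.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, relative_name)

input_file = resource_path("input.csv")   # hypothetical input file shipped with the build
print("Reading", input_file)
```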
32,592,827 | 2015-09-15T18:09:00.000 | 10 | 0 | 1 | 0 | python,cmd,pyinstaller,linear-programming,glpk | 57,582,101 | 5 | false | 0 | 0 | I just solved this problem for myself.
Make sure you do not have a folder with the same name as the script you are trying to turn into an executable.
If you already have application file sample.py as well as folder sample (that contains your other .py files, say) and you want the application to retain the name sample, you can work around this problem by renaming sample.py to sample_app.py and then invoke pyinstaller with the --name option e.g., pyinstaller --onefile --name sample sample_app.py The binary created by pyinstaller will be called sample. | 3 | 8 | 0 | After 3 days, I can't get a python program packaged into a .exe file. I've tried py2exe (which continuously missed modules), and PyInstaller.
Here's the complicated part. My program uses a lot of additional installed modules (coopr, pyomo, openpyxl, glpk, cbc, pyutilib, numpy, etc.). These in turn import all kinds of other things, and I can't track it down (the PyInstaller warning log lists 676 lines of missing or potentially unneeded modules.)
However, I've gotten (by adding imports of "missing" modules to my program) a .exe version which runs from double clicking or from the command line, without printing any error.
The problem is, the program does nothing. I have an input file which is included in the build, which my program reads in, does some (intense) calculations, and then creates a .csv output file in the same directory. It works as a .py file. My .exe does nothing.
So, if you can tell me what's wrong go ahead. If not, I'd like to know any helpful steps or ideas to try. At this point, I've exhausted the feedback I can find from the program and documentation. | PyInstaller .exe file does nothing | 1 | 0 | 0 | 14,649 |
32,592,827 | 2015-09-15T18:09:00.000 | 1 | 0 | 1 | 0 | python,cmd,pyinstaller,linear-programming,glpk | 70,331,167 | 5 | false | 0 | 0 | When an exe fails to run when double-clicked in windows, the associated window also automatically closes. If this happens for you, open the command prompt and try to run the exe there by going to the appropriate path and typing in the filename (e.g. C:\github\program\dist\main.py). Any errors thrown by the application will be printed in the command prompt window, without it automatically closing. | 3 | 8 | 0 | After 3 days, I can't get a python program packaged into a .exe file. I've tried py2exe (which continuously missed modules), and PyInstaller.
Here's the complicated part. My program uses a lot of additional installed modules (coopr, pyomo, openpyxl, glpk, cbc, pyutilib, numpy, etc.). These in turn import all kinds of other things, and I can't track it down (the PyInstaller warning log lists 676 lines of missing or potentially unneeded modules.)
However, I've gotten (by adding imports of "missing" modules to my program) a .exe version which runs from double clicking or from the command line, without printing any error.
The problem is, the program does nothing. I have an input file which is included in the build, which my program reads in, does some (intense) calculations, and then creates a .csv output file in the same directory. It works as a .py file. My .exe does nothing.
So, if you can tell me what's wrong go ahead. If not, I'd like to know any helpful steps or ideas to try. At this point, I've exhausted the feedback I can find from the program and documentation. | PyInstaller .exe file does nothing | 0.039979 | 0 | 0 | 14,649 |
32,593,020 | 2015-09-15T18:22:00.000 | 5 | 0 | 1 | 0 | python,c++,windows,numpy,installation | 32,593,167 | 1 | true | 0 | 0 | Two main things you need to know:
Python packages are usually distributed as sources (though there's an ongoing effort to ship them as binary wheels instead).
Python packages sometimes include C or C++ code. That's the case for Numpy (but a lot of other packages don't).
But, when you install a package from source, and it includes C or C++ code, you need to compile that code to run it (unlike Python code, which is interpreted).
But, to compile C / C++ code, you need a C / C++ compiler. As it turns out, Visual C++ ships with a C / C++ compiler (and it's the standard for Windows).
Note that if you were using Linux instead of Windows, you'd want to install gcc (or clang) instead. | 1 | 1 | 0 | Just as the question states, I'm interested in the WHY. What exactly is happening that the numpy package cannot be installed without it? | Why is Visual C++ Installer necessary to install numpy package for Python? | 1.2 | 0 | 0 | 1,619 |
32,593,997 | 2015-09-15T19:22:00.000 | 2 | 1 | 1 | 0 | python,performance,module | 32,594,171 | 3 | false | 0 | 0 | Obviously sympy does a lot when being imported. It could be initialization of internal data structures or similar. You could call this a flaw in the design of the sympy library.
Your only choice in this case would be to avoid redoing this initialization.
I assume that you find this behavior annoying because you intend to do it often. I propose to avoid doing it often. A way to achieve this could be to create a server which is started just once, imports sympy upon its startup, and then offers a service (via interprocess communication) which allows you to do whatever you want to do with sympy.
If this could be an option for you, I could elaborate on how to do this. | 1 | 7 | 0 | I have a CLI application that requires sympy. The speed of the CLI application matters - it's used a lot in a user feedback loop.
However, simply doing import sympy takes a full second. This gets incredibly annoying in a tight feedback loop. Is there any way to 'preload' or optimize a module when a script is run again without a change to the module? | Is there any way to speed up an import? | 0.132549 | 0 | 0 | 4,855
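One minimal variant of the answer's "start a process once, import sympy once" idea, using a plain stdin/stdout pipe rather than a full IPC service; the file name and the use of simplify are illustrative assumptions.

```python
# sympy_worker.py -- pays the sympy import cost once, then serves many requests.
import sys
import sympy

for line in sys.stdin:              # one expression per line, e.g. "x + x"
    expr = line.strip()
    if not expr:
        continue
    print(sympy.simplify(expr))     # sympify + simplify the incoming expression
    sys.stdout.flush()              # flush so the calling process sees each result immediately
```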
32,595,475 | 2015-09-15T20:56:00.000 | 0 | 0 | 0 | 0 | python-2.7,pygame,user-input,joystick | 32,722,976 | 1 | false | 0 | 1 | I had to calibrate my joystick by using software provided when I bought the joystick. I couldn't find a way to do so with pygame.
Are the outputs actually incorrect though? My outputs are never zero'd, but my program functions properly when using them. | 1 | 0 | 0 | I'm using pygame with a joystick controller. The joystick controller is not calibrated correctly, though. The right horizontal controller continually outputs bad values and does not zero correctly upon return to center position. Is this fully a hardware issue, or is there a method of calibration/continual calibration using pygame or another library? | Python pygame calibrate joystick controller | 0 | 0 | 0 | 378 |
32,599,677 | 2015-09-16T04:14:00.000 | 0 | 1 | 0 | 0 | php,python,telnet | 32,620,211 | 2 | true | 0 | 0 | You're probably trying to connect to a wrong port. Check with netstat -lntp which port is your http server listening on. The process will be listed as python/pid_number. | 1 | 0 | 0 | EDIT:
I want to telnet into my web server on localhost, and request my php file from command line:
I have:
1) cd'd into the directory I want to serve, namely "/www" (hello.php is here)
2) run a server at directory www: python -m SimpleHTTPServer
3) telnet localhost 80
but "connection is refused". what am I doing wrong? | Telnet connection on localhost Refused | 1.2 | 0 | 1 | 2,140 |
32,601,954 | 2015-09-16T07:11:00.000 | 0 | 0 | 0 | 0 | python,django,url | 49,653,008 | 1 | true | 1 | 0 | What I usually do is create a dir with uploaded date as name, e.g. 04042018/ and then I rename the uploaded file with an uuid4, e.g. 550e8400-e29b-41d4-a716-446655440000.jpg
So the fullpath for the uploaded file will be something like this:
site_media/media/04042018/550e8400-e29b-41d4-a716-446655440000.jpg
Personally I think this is better than having something like site_media/media/username/file.jpg, because it will be easier to figure out which images belong to whom. | 1 | 1 | 0 | For example, uploading a gif to gfycat generates URLs in the form of Adjective-Adjective-Noun such as ForkedTestyRabbit.
Obviously going to this URL allows you to view the model instance that was uploaded.
So I'm thinking the post-upload step generates a unique random URL, e.g. /uploads/PurpleSmellyGiraffe. The model will have a column that is the custom URL part, i.e. PurpleSmellyGiraffe, and then in urls.py anything matching /uploads/* will select the corresponding model instance by that URL. However, I'm not sure this is the best practice.
Could I get some feedback/suggestions on how to implement this? | What is the right way in Django to generate random URLs for user uploaded items? | 1.2 | 0 | 0 | 508 |
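A short Django sketch of the date-directory + uuid4 naming scheme from the answer; the model and field names are placeholders, and a similar random token could back the /uploads/<slug>/ URL the asker describes.

```python
import os
import uuid
from datetime import date
from django.db import models

def upload_to(instance, filename):
    # e.g. "uploads/04042018/550e8400-e29b-41d4-a716-446655440000.jpg"
    ext = os.path.splitext(filename)[1]
    return "uploads/{}/{}{}".format(date.today().strftime("%d%m%Y"), uuid.uuid4(), ext)

class Upload(models.Model):                      # hypothetical model
    image = models.ImageField(upload_to=upload_to)
```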
32,604,400 | 2015-09-16T09:17:00.000 | 1 | 0 | 1 | 0 | python,function,debugging,wing-ide | 32,632,712 | 1 | true | 0 | 0 | Wing can move the program counter to a different line in the function via the right-click popup menu, but you'd need to do this every time you run the function. I think a better approach is to refactor the function into smaller functions -- then you can comment out or conditionalize only the function calls. You could also write tests that call some of the functions and not others. | 1 | 0 | 0 | I have a couple of functions written in a single Python file. They perform a sequence of steps on a file-based dataset.
My workflow:
After I finished coding a part of the function's body, I run the function to see how it goes.
It may break at a certain point.
I fix the code and re-run the function.
The problem is that when I re-run the function, it will execute the lines that were already completed successfully. Yet I want to be able to start not from the beginning but rather from an arbitrary point. This is because the whole function runs for several minutes and it would be a waste of time to wait for it to complete.
I could implement "checks" to see whether this operation is required (e.g., don't create a file if it already exists), but this would imply adding a lot of new validation code (e.g., make sure that the existing file does contain the content needed); in reality, my function will be run on a dataset in known format and the whole function should be executed.
The most obvious solution is to comment out the parts that were executed successfully, but it's a hassle and I got tired of commenting and uncommenting parts as I move forward and the function gets larger.
Are there any better approaches than commenting out lines for ignoring certain parts of the function's body when executing?
I am on Wing IDE if this has something to do with the debugging tricks in an IDE itself. | Execute function body ignoring certain lines without comments (Python)? | 1.2 | 0 | 0 | 190 |
32,611,297 | 2015-09-16T14:20:00.000 | 0 | 0 | 0 | 0 | python,django,pdf-generation,xhtml2pdf | 33,307,465 | 1 | false | 1 | 0 | When you call pisa.CreatePDF() make sure to include encoding.
Here's what we use; obviously it's a bit out of context, but it was mostly copied from the example documentation.
pisaStatus = pisa.CreatePDF(html.encode('UTF-8'), encoding="UTF-8", dest=f, link_callback=link_callback) | 1 | 0 | 0 | While creating a PDF using xhtml2pdf we are unable to print the "£" sign, though it works for the dollar and euro. We have tried setting the font face and changing the font family, but we are still not able to print GBP. | xhtml2pdf is not displaying GBP ("£") signs while creating PDF | 0 | 0 | 0 | 319
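Expanding the answer's call into a runnable sketch (Python 2 era, hence the u'' literal); the import path and output file name are assumptions, and the key point is encoding the HTML as UTF-8 and passing encoding="UTF-8".

```python
# -*- coding: utf-8 -*-
from xhtml2pdf import pisa   # import path may differ in older xhtml2pdf releases

html = u"""<html><head><meta charset="UTF-8"></head>
<body><p>Total due: £1,250</p></body></html>"""

with open("invoice.pdf", "wb") as f:
    status = pisa.CreatePDF(html.encode("UTF-8"), encoding="UTF-8", dest=f)
print(status.err)   # 0 means the PDF was written without errors
```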
32,611,932 | 2015-09-16T14:47:00.000 | 5 | 0 | 1 | 0 | python,pip,anaconda,conda | 32,615,498 | 2 | false | 0 | 0 | Odds are that anaconda automatically edited your .bashrc so that anaconda/bin is in front of your /usr/bin folder in your $PATH variable. To check this, type echo $PATH, and the command line will return a list of directory paths. Your computer checks each of these places for pip when you type pip in the command line. It executes the first one it finds in your PATH.
You can open /home/username/.bashrc with whatever text editor you choose. Wherever it adds anaconda/bin to the path, with something like export PATH=/anaconda/bin:$PATH , just replace it with export PATH=$PATH:/anaconda/bin
Note though, this will change your OS to use your system python as well. Instead of all of this, you can always just use the direct path to pip when calling it. Or you can alias it using alias pip=/path/to/system/pip. And you can put that line in your .bashrc file in order to apply it whenever you login to the pc. | 2 | 8 | 0 | I am now using anaconda pip after I installed pip by "conda install pip", if I want to use system pip again, how can I make it? Or how can I switch system pip to anaconda pip? | How can I switch using pip between system and anaconda | 0.462117 | 0 | 0 | 9,104 |
32,611,932 | 2015-09-16T14:47:00.000 | 1 | 0 | 1 | 0 | python,pip,anaconda,conda | 32,617,096 | 2 | false | 0 | 0 | You don't need to change your path. Just use the full path to the system pip (generally /usr/bin/pip or /usr/local/bin/pip) to use the system pip. | 2 | 8 | 0 | I am now using anaconda pip after I installed pip by "conda install pip", if I want to use system pip again, how can I make it? Or how can I switch system pip to anaconda pip? | How can I switch using pip between system and anaconda | 0.099668 | 0 | 0 | 9,104 |
32,615,088 | 2015-09-16T17:31:00.000 | 1 | 0 | 1 | 0 | python,django,testing,django-rest-framework | 32,615,089 | 2 | false | 1 | 0 | Given all the others paths were already covered:
Import order
Virtual Env created
Project's Interpreter using the Virtual Env
The only thing that occurred to me was do run the following command within the vurtal env:
pip install -r requirements.txt
And it worked! In the end, someone had updated the requirements, which weren't being met by my current virtual env, and that was screwing up the paths/imports within PyCharm.
From the command line I can run them both using paver or the manage.py directly.
There was a time when that would happen when we didn't import the class' superclass at the top of the file, but that's not the case.
We have a virtualenv set up locally and run the server from a Vagrant box. I made sure the virtual environment is loaded and that the project's interpreter is using the aforementioned virtual env.
No clue on what's the matter. | PyCharm throws "AttributeError: 'module' object has no attribute" when running tests for no reason | 0.099668 | 0 | 0 | 7,129 |
32,615,088 | 2015-09-16T17:31:00.000 | 2 | 0 | 1 | 0 | python,django,testing,django-rest-framework | 37,061,343 | 2 | false | 1 | 0 | I had the same problem but my solution was different.
When I tried to run a test from PyCharm, the target path looked like this:
tests.apps.an_app.models.a_model.ATestCase
But since ATest was a class inside a_model.py, the targ path should actually be:
tests.apps.an_app.models.a_model:ATestCase
Changing the target in the test configuration worked. | 2 | 5 | 0 | So, I have a Django-REST Framework project and one day it simply ceased being able to run the tests within PyCharm.
From the command line I can run them both using paver or the manage.py directly.
There was a time when that would happen when we didn't import the class' superclass at the top of the file, but that's not the case.
We have a virtualenv set up locally and run the server from a Vagrant box. I made sure the virtual environment is loaded and that the project's interpreter is using the aforementioned virtual env.
No clue on what's the matter. | PyCharm throws "AttributeError: 'module' object has no attribute" when running tests for no reason | 0.197375 | 0 | 0 | 7,129 |
32,616,148 | 2015-09-16T18:34:00.000 | 2 | 1 | 1 | 0 | python-2.7,pip,pycrypto | 32,632,879 | 1 | true | 0 | 0 | If anyone is having this same problem, the reason was that I had mistakenlly installed the package crypto before installing pycrypto. Once I removed both packages and reinstalled pycrypto everything worked.
I believe that it might be related to Windows treating crypto and Crypto folders as the same. | 1 | 1 | 0 | I'm using python 2.7 and tried installing the pyCrypto module using pip (pip install pycrypto) which downloaded and installed the 2.6 version, as it is needed for using twisted.
However, whenever I try to use it, I get an ImportError saying that the module Crypto doesn't exist - but I can import crypto normally.
I already tried uninstalling and installing it again, but still didn't work.
Is there any bug in the package downloaded using pip or is it anything else I'm doing wrong?
Thank you. | pyCrypto importing only "crypto", not "Crypto" (not found) | 1.2 | 0 | 0 | 1,080 |
32,616,625 | 2015-09-16T19:00:00.000 | 2 | 0 | 0 | 0 | python,database,web2py | 32,618,356 | 2 | true | 1 | 0 | Files in the /models folder are executed in alphabetical order, so just put the DAL definition at the top of the first model file that needs to use it (it will then be available globally in all subsequent model files as well as all controllers and views). | 1 | 0 | 0 | I found this line to help configure Postgresql in web2py but I can't seem to find a good place where to put it :
db = DAL("postgres://myuser:mypassword@localhost:5432/mydb")
Do I really have to write it in all db.py ? | web2py database configuration | 1.2 | 1 | 0 | 939 |
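A sketch of what the accepted answer suggests for a single model file (e.g. models/0_db.py, a hypothetical name chosen so it sorts first); web2py injects DAL and Field into the model environment, and every later model, controller and view then sees db.

```python
# models/0_db.py -- runs first because model files execute in alphabetical order.
# DAL and Field are provided by web2py's execution environment; no import is needed here.
db = DAL("postgres://myuser:mypassword@localhost:5432/mydb")

# Example table definition; later model files can keep adding tables to the same `db`.
db.define_table("person", Field("name", "string"))
```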
32,617,391 | 2015-09-16T19:46:00.000 | 0 | 1 | 0 | 0 | python,ftp,ftplib | 32,623,881 | 1 | false | 0 | 0 | No there's no standard way to retrieve directory listing by parts in FTP protocol.
Some FTP servers do support wildcards in the listing commands (NLST and alike). So you could get first all file starting with a, then with b, etc. But you have to test this specifically with your server, as it is a non-standard behavior. | 1 | 0 | 0 | Am new to Python and using FTPLib for some reason.
My aim is this: I have a server where .txt files are stored by different clients very frequently. With the nlst() function I can get the files present on the FTP server, but it returns all the files, and since the server has a huge number of files the response time is slow.
Is there any way to get the first twenty entries from the FTP server using some function, and then the next twenty? This way I could improve the response time from the FTP server considerably.
Regards | Python ftplib: Getting number of files from FTP | 0 | 0 | 0 | 467 |
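A small ftplib sketch of the wildcard idea from the answer; host and credentials are placeholders, and whether the server honours the pattern is entirely server-specific, since wildcards in NLST are non-standard.

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")          # placeholder host
ftp.login("user", "password")         # placeholder credentials

# Extra arguments to nlst() are appended to the NLST command sent to the server,
# so this asks for names starting with "a" -- it only works if the server supports it.
names = ftp.nlst("a*.txt")
print(names)
ftp.quit()
```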
32,618,466 | 2015-09-16T20:53:00.000 | 0 | 1 | 0 | 0 | python,twitter,emoji | 32,618,578 | 2 | false | 0 | 0 | OK got it.
msg += u'\U0001f468' gives me an old (white) man
msg += u'\U0001f468\U0001f3ff' gives me an old (afro-caribbean) man. | 1 | 1 | 0 | I have a script done which uses the twitter lib to send a twitter DM.
I've tried a number of ways to include codes which render on iOS 8+ as emoji without luck. Google has been unkind.
Examples:
msg += u'\xF0\x9F\x9A\x80' gives me no rocket. I get a d with a line through the top.
msg += u'U+1F684' gives me the code not a train
As I can include emoji when I send a twitter DM to a user, the server clearly handles meta data pertaining to emoji. As emoji is a UTF-8 character set rather than a font, I'm surprised that in the first example I'm getting representation in the font the twitter DM arrives in.
How can I send such characters from python 2? | How to send emoji with Twitter Python lib | 0 | 0 | 0 | 1,192 |
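A tiny Python 2 sketch of the escape syntax the answer uses: one 8-digit \U escape per code point (U+1F680 is the rocket the asker wanted). The actual DM-sending call of the twitter lib is deliberately left out; pass msg to it as you already do.

```python
# -*- coding: utf-8 -*-
# Build the DM text with 8-digit \U escapes, not \x byte sequences,
# which Python 2 treats as four separate characters inside a unicode literal.
msg = u"Launching now " + u"\U0001F680"     # U+1F680 ROCKET
msg += u", arriving by " + u"\U0001F684"    # U+1F684 HIGH-SPEED TRAIN

print(repr(msg))
# hand `msg` to your existing twitter-lib DM call; encode with
# msg.encode("utf-8") only if that call expects a byte string
```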
32,618,686 | 2015-09-16T21:04:00.000 | 0 | 0 | 1 | 0 | pip,python-3.4,centos7 | 49,189,089 | 10 | false | 0 | 0 | There is a easy way of doing this by just using easy_install (A Setuptools to package python librarie).
Assumption.
Before doing this check whether you have python installed into your Centos machine (at least 2.x).
Steps to install pip.
So let's install easy_install:
sudo yum install python-setuptools python-setuptools-devel
Now let's install pip with easy_install:
sudo easy_install pip
That's Great. Now you have pip :) | 1 | 112 | 0 | CentOS 7 EPEL now includes Python 3.4: yum install python34
However, when I try that, even though Python 3.4 installs successfully, it doesn't appear to install pip. Which is weird, because pip should be included by default with Python 3.4. which pip3 doesn't find anything, nor does which pip.
How do I access pip from the Python 3.4 package in CentOS 7 EPEL release? | How to install pip in CentOS 7? | 0 | 0 | 0 | 190,246 |
32,620,254 | 2015-09-16T23:15:00.000 | 0 | 0 | 0 | 0 | python | 37,981,930 | 2 | false | 0 | 0 | I'm thinking the same as remram in the comments: parse takes a file location or a file object and preserves that information so that it can provide additional utility, which is really helpful. If parse did not return an ET object, then you would have to keep better track of the sources and whatnot in order to manually feed them back into the helper functions that ET objects have by default. In contrast to files, Strings- by definition- do not have the same kind of information attached from them, so you can't create the same utilities for them (otherwise there very well may be an ET.parsefromstring() method which would return an ET Object).
I suspect this is also the logic behind the method being named parse instead of ET.fromfile(): I would expect the same object type to be returned from fromfile and fromstring, but can't say I would expect the same from parse (it's been a long time since I started using ET, so there's no way to verify that, but that's my feeling).
On the subject Remram raised of placing utility methods on Elements, as I understand the documentation, Elements are extremely uniformed when it comes to implementation. People talk about "Root Elements," but the Element at the root of the tree is literally identical to all other Elements in terms of its class Attributes and Methods. As far as I know, Elements don't even know who their parent is, which is likely to support this uniformity. Otherwise there might be more code to implement the "root" Element (which doesn't have a parent) or to re-parent subelements. It seems to me that the simplicity of the Element class works greatly in its favor. So it seems better to me to leave Elements largely agnostic of anything above them (their parent, the file they come from) so there can't be any snags concerning 4 Elements with different output files in the same tree (or the like).
When it comes to implementing the module inside of code, it seems to me that the script would have to recognize the input as a file at some point, one way or another (otherwise it would be trying to pass the file to fromstring). So there shouldn't arise a situation in which the output of parse should be unexpected such that the ElementTree is assumed to be an Element and processed as such (unless, of course, parse was implemented without the programmer checking to see what parse did, which just seems like a poor habit to me). | 1 | 9 | 0 | I'm a bit confused by some of the design decisions in the Python ElementTree API - they seem kind of arbitrary, so I'd like some clarification to see if these decisions have some logic behind them, or if they're just more or less ad hoc.
So, generally there are two ways you might want to generate an ElementTree - one is via some kind of source stream, like a file, or other I/O stream. This is achieved via the parse() function, or the ElementTree.parse() class method.
Another way is to load the XML directly from a string object. This can be done via the fromstring() function.
Okay, great. Now, I would think these functions would basically be identical in terms of what they return - the difference between the two of them is basically the source of input (one takes a file or stream object, the other takes a plain string.) Except for some reason the parse() function returns an ElementTree object, but the fromstring() function returns an Element object. The difference is basically that the Element object is the root element of an XML tree, whereas the ElementTree object is sort of a "wrapper" around the root element, which provides some extra features. You can always get the root element from an ElementTree object by calling getroot().
Still, I'm confused why we have this distinction. Why does fromstring() return a root element directly, but parse() returns an ElementTree object? Is there some logic behind this distinction? | Python ElementTree: ElementTree vs root Element | 0 | 0 | 1 | 1,617 |
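A short sketch of the distinction being asked about, assuming a local data.xml exists: parse() gives the ElementTree wrapper (which remembers its source and can write() back), while fromstring() hands you the root Element directly.

```python
import xml.etree.ElementTree as ET

tree = ET.parse("data.xml")        # ElementTree object wrapping the whole document
root_from_file = tree.getroot()    # its root Element

root_from_str = ET.fromstring("<root><child/></root>")   # an Element, no wrapper

print(type(tree))            # <class 'xml.etree.ElementTree.ElementTree'>
print(type(root_from_file))  # <class 'xml.etree.ElementTree.Element'>
print(type(root_from_str))   # <class 'xml.etree.ElementTree.Element'>
```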
32,620,930 | 2015-09-17T00:45:00.000 | 1 | 0 | 0 | 0 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 32,637,043 | 6 | true | 1 | 0 | Well, I found the issue. I have auditlog installed as one my apps. I removed it and migrate works fine. | 4 | 6 | 0 | I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | Django 1.8 migrate: django_content_type does not exist | 1.2 | 1 | 0 | 10,126 |
32,620,930 | 2015-09-17T00:45:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 67,501,508 | 6 | false | 1 | 0 | i had drop the database and rebuild it, then in the PyCharm Terminal py manage.py makemigrations and py manage.py migrate fix this problem. I think the reason is the table django_content_type is the django's table, if it misssed can not migrate, so have to drop the database and rebuild. | 4 | 6 | 0 | I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | Django 1.8 migrate: django_content_type does not exist | 0 | 1 | 0 | 10,126 |
32,620,930 | 2015-09-17T00:45:00.000 | 6 | 0 | 0 | 0 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 32,623,157 | 6 | false | 1 | 0 | Delete all the migration folder from your app and delete the database then migrate your database......
if this does not work delete django_migration table from database and add the "name" column in django_content_type table ALTER TABLE django_content_type ADD COLUMN name character varying(50) NOT NULL DEFAULT 'anyName'; and then run $ python manage.py migrate --fake-initial | 4 | 6 | 0 | I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | Django 1.8 migrate: django_content_type does not exist | 1 | 1 | 0 | 10,126 |
32,620,930 | 2015-09-17T00:45:00.000 | 2 | 0 | 0 | 0 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 37,074,120 | 6 | false | 1 | 0 | Here's what I found/did. I am using django 1.8.13 and python 2.7. The problem did not occur for Sqlite. It did occur for PostgreSQL.
I have an app the uses a GenericForeignKey (which relies on Contenttypes). I have another app that has a model that is linked to the first app via the GenericForeignKey. If I run makemigrations for both these apps, then migrate works. | 4 | 6 | 0 | I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | Django 1.8 migrate: django_content_type does not exist | 0.066568 | 1 | 0 | 10,126 |
32,625,216 | 2015-09-17T07:56:00.000 | 4 | 0 | 1 | 0 | python,azure,azure-storage,azure-queues | 32,636,789 | 2 | true | 0 | 0 | If you're using 0.30+ all errors that occur after the request to the service has been will extend from AzureException. AzureException can be found in the azure.common package which Azure storage takes a dependency on. Errors which are thrown if invalid args are passed to a method (ex None for the queue name) might not extend from this and will be standard Python exception like ValueError. | 1 | 2 | 0 | I am trying to implement a job which reads from Azure Queue and writes into db. occasionally some errors are raised from the Azure server such as timeout, server busy etc. How to handle such errors in the code, I tried ti run the code in a try catch loop but, I am not able to identify Azure errors?
I tried to import WindowsAzureError from azure, but it doesn't work (there is no such module to import).
Which is a good way to handle errors in this case? | How to handle exceptions in Python Azure Storage SDK | 1.2 | 0 | 0 | 2,865 |
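A hedged sketch of the answer's advice, assuming the azure-storage 0.30+ layout; account credentials and the queue name are placeholders. Anything raised after the request reaches the service (timeouts, "server busy", ...) should surface as an AzureException from azure.common, while bad arguments raise ordinary exceptions such as ValueError before any request is sent.

```python
from azure.common import AzureException
from azure.storage.queue import QueueService   # layout assumed for azure-storage 0.30+

queue_service = QueueService(account_name="myaccount", account_key="...")  # placeholders

try:
    for message in queue_service.get_messages("jobs"):   # hypothetical queue name
        print(message)          # replace with the actual "write into db" step
except AzureException as exc:    # timeouts, server busy, and other service-side errors
    print("Azure storage error, retry later:", exc)
```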
32,629,367 | 2015-09-17T11:35:00.000 | 1 | 0 | 1 | 1 | python-3.x,ubuntu-14.04,python-idle | 32,637,094 | 1 | true | 0 | 0 | I have not used the Ubuntu terminal, but I will assume that it is a typical terminal program. If you type python3, it starts python3, which prints, in the same window, something like Python 3.4.3 ... and then a prompt >>>. You interact with python3 via the terminal program.
If you type idle3, it runs a python gui program (Idle) with python3. That program prints, in a separate window, something like Python 3.4.3 ... and then a prompt >>>. You interact with python3 via this python program. In either case, any code you enter is executed by python3. For nearly all code you might enter, such as anything in the tutorial, the printed response will be the same.
The difference in terms of interaction is that in the terminal, if it is typical, you enter and recall (with up arrow?) lines of code, whereas in Idle, you enter and recall (with Alt-p) statements, which may comprise multiple lines. Also, Idle syntax colors your code, whereas your terminal may not.
A bigger difference is that Idle is not just a Python terminal or shell, but is an integrated development environment that includes an editor that works with the shell. You can run code from the editor with F5. If there is an error traceback in the shell, you can right click on an error line and go to the line with the error. | 1 | 0 | 0 | Is there a difference using IDLE3 or the Ubuntu 14.04 terminal for Python3 interpretation? In that case, what are the differences? | Is there a difference using Python3 in IDLE3 or in Ubuntu 14.04 terminal? | 1.2 | 0 | 0 | 247 |
32,629,369 | 2015-09-17T11:35:00.000 | 2 | 0 | 0 | 0 | python,django,models | 32,631,283 | 1 | true | 1 | 0 | No, there is no way of doing that because there is no queryset between class A and class B, like it was with ManyToMany or ForeignKey. You must create B directly and assign A to proper field on B. | 1 | 2 | 0 | Django offers Related objects for oneToMany or manyToMany relationship.
Using this object, It can create a record from the reverse direction.
(for example with XXXX_set.create(.....) or XXXX_set.get_or_create(.....))
I want to use this kind of function with a OneToOne relationship.
Is there any way to create a one-to-one relationship record from the reverse direction?
For example:
If class A(models.Model) and class B(models.Model) are tied by a one-to-one relationship and I create A and save() it, then I want to create B as well, through A. | How to create a record from one to one relationship reverse direction in Django? | 1.2 | 0 | 0 | 325
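A minimal sketch of what the answer describes (field and model names are assumed): with a OneToOneField there is no a.b_set.create(), so B is created directly with its A assigned, after which the reverse accessor a.b works.

```python
from django.db import models

class A(models.Model):
    name = models.CharField(max_length=50)

class B(models.Model):
    a = models.OneToOneField(A, on_delete=models.CASCADE)

# usage (e.g. in a view or the Django shell):
#   a = A.objects.create(name="example")
#   b = B.objects.create(a=a)   # create B directly and assign A to its field
#   a.b                         # reverse accessor works once B exists
```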
32,637,978 | 2015-09-17T18:55:00.000 | 2 | 0 | 1 | 0 | java,python,c,variables | 32,638,481 | 2 | true | 0 | 0 | The type of value a C variable can hold is a fixed characteristic of that variable. More generally, every expression in C has a data type that can be determined based on the containing program's source code alone. C has mechanisms to convert values from one type to another, but that's a different beast. (Also, "converting" a value should be understood as an operation that accepts an input value and produces a different output value, as opposed to modifying a single value in-place. The same distinction applies to Python, too.)
C also supports data types ("unions") that provide overlapping storage for member values of different types. Variables having such types can be considered able to contain values of different types at different times. The particular types they can contain are specified in advance and cannot change, however.
In Python, on the other hand, data type is not a characteristic of variables, but it is still a characteristic of values. Each Python value has a specific type that never changes, but you can replace the value of a variable with a value of a different type. | 2 | 1 | 0 | I recently red a book about Python that went into a little detail on Python's ability to re-assign variable types (int, str, float, etc.) on the fly which is one of Python's strengths (as far as I am aware).
However, this is not the case in other programming languages, such as C and Java. From what I know, you must first declare the type of variable that the variable is, such as int or str and then assign its value.
Is there a way to re-assign that variable's type, say if you were to delete its value first? | Re-assigning variable types/values in "strong" code languages | 1.2 | 0 | 0 | 67 |
32,637,978 | 2015-09-17T18:55:00.000 | 1 | 0 | 1 | 0 | java,python,c,variables | 32,638,044 | 2 | false | 0 | 0 | C and Java are both strongly typed languages. Once the type is defined, it doesn't change. As a result the compiler is able to perform type checking in all expressions to ensure that the proper type is in use for the context it is in.
There are instances where you can use a type cast to switch between one type and another, but those are exceptions rather than the rule. | 2 | 1 | 0 | I recently red a book about Python that went into a little detail on Python's ability to re-assign variable types (int, str, float, etc.) on the fly which is one of Python's strengths (as far as I am aware).
However, this is not the case in other programming languages, such as C and Java. From what I know, you must first declare the type of variable that the variable is, such as int or str and then assign its value.
Is there a way to re-assign that variable's type, say if you were to delete its value first? | Re-assigning variable types/values in "strong" code languages | 0.099668 | 0 | 0 | 67 |
32,639,074 | 2015-09-17T20:07:00.000 | 4 | 0 | 1 | 0 | python,windows-7,pip | 48,755,132 | 16 | false | 0 | 0 | The ensurepip module was added in version 3.4 and then backported to 2.7.9.
So make sure your Python version is at least 2.7.9 if using Python 2, and at least 3.4 if using Python 3. | 5 | 108 | 0 | I have installed pip and ez setup. I also checked the system path and I can see the module in the folder structure. Still when i try to run pip command, I get an Import error saying no module named pip. I am running 32bit python on a windows7 machine | Why am I getting ImportError: No module named pip ' right after installing pip? | 0.049958 | 0 | 0 | 247,979 |
32,639,074 | 2015-09-17T20:07:00.000 | 4 | 0 | 1 | 0 | python,windows-7,pip | 68,576,280 | 16 | false | 0 | 0 | I found this post while looking for a solution for the same problem. I was using an embedded python distribution. In this case, the solution is to uncomment import site in the file python<version>._pth. | 5 | 108 | 0 | I have installed pip and ez setup. I also checked the system path and I can see the module in the folder structure. Still when i try to run pip command, I get an Import error saying no module named pip. I am running 32bit python on a windows7 machine | Why am I getting ImportError: No module named pip ' right after installing pip? | 0.049958 | 0 | 0 | 247,979 |
32,639,074 | 2015-09-17T20:07:00.000 | 0 | 0 | 1 | 0 | python,windows-7,pip | 52,826,969 | 16 | false | 0 | 0 | I've solved this error downloading the executable file for python 3.7.
I had downloaded the embeddable version and got that error.
Now it works! :D | 5 | 108 | 0 | I have installed pip and ez setup. I also checked the system path and I can see the module in the folder structure. Still when i try to run pip command, I get an Import error saying no module named pip. I am running 32bit python on a windows7 machine | Why am I getting ImportError: No module named pip ' right after installing pip? | 0 | 0 | 0 | 247,979 |
32,639,074 | 2015-09-17T20:07:00.000 | 5 | 0 | 1 | 0 | python,windows-7,pip | 64,071,289 | 16 | false | 0 | 0 | Running these 2 commands helped me:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py | 5 | 108 | 0 | I have installed pip and ez setup. I also checked the system path and I can see the module in the folder structure. Still when i try to run pip command, I get an Import error saying no module named pip. I am running 32bit python on a windows7 machine | Why am I getting ImportError: No module named pip ' right after installing pip? | 0.062419 | 0 | 0 | 247,979 |
32,639,074 | 2015-09-17T20:07:00.000 | 303 | 0 | 1 | 0 | python,windows-7,pip | 42,531,427 | 16 | true | 0 | 0 | Just be sure that you have include python to windows PATH variable, then run python -m ensurepip | 5 | 108 | 0 | I have installed pip and ez setup. I also checked the system path and I can see the module in the folder structure. Still when i try to run pip command, I get an Import error saying no module named pip. I am running 32bit python on a windows7 machine | Why am I getting ImportError: No module named pip ' right after installing pip? | 1.2 | 0 | 0 | 247,979 |
32,639,137 | 2015-09-17T20:10:00.000 | 1 | 0 | 0 | 0 | python-2.7,kivy | 32,649,566 | 1 | false | 0 | 1 | Do exactly what you're already doing, but have the function you're calling check if whatever condition you're after is met. For instance, if you want to call it after 5 characters have been entered, check the length of the string.
This seems an awkward way of working though, are you sure you want to check after every character rather than, say, wait for the user to press enter? | 1 | 0 | 0 | I am trying to capture text from a Text Input box and fire off a function after the text is inputted. Only problem is the event is fired everytime a character is entered. I would like the event fired after all characters have been entered in the text input box. How could i delay the firing until all characters have been entered?
I tried to override the on_text method, but that did not solve my problem, as it is this method that is being called 20 times. I also tried putting a sleep in on_text, but it just buffered the responses and still fired 20 times. | How do I delay an event in Kivy? | 0.197375 | 0 | 0 | 133
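A small Kivy sketch of the answer's suggestion: keep the per-keystroke callback but only act once a condition holds (here an assumed 5-character threshold); binding on_text_validate instead fires only when the user presses Enter on a single-line TextInput.

```python
from kivy.app import App
from kivy.uix.textinput import TextInput

class DemoApp(App):
    def build(self):
        ti = TextInput(multiline=False)
        ti.bind(text=self.maybe_fire)              # still called on every keystroke...
        ti.bind(on_text_validate=self.on_enter)    # ...or react only when Enter is pressed
        return ti

    def maybe_fire(self, instance, value):
        if len(value) >= 5:                        # assumed threshold: act only now
            print("firing with:", value)

    def on_enter(self, instance):
        print("enter pressed with:", instance.text)

if __name__ == "__main__":
    DemoApp().run()
```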
32,639,706 | 2015-09-17T20:48:00.000 | 5 | 0 | 0 | 0 | python,ruby-on-rails,ruby,django,ruby-on-rails-3 | 32,639,897 | 1 | true | 1 | 0 | The db/schema.rb file keeps track of the current state, and you can delete your migrations at any point and use the rake db:schema:load task to load the db/schema.rb into your DB. | 1 | 4 | 0 | I was developing in both RoR and Django based projects, and I don't like the way how RoR deals with migrations. For example, if I make huge changes to my models over 2 years, in Django I can delete all migrations and make new, single file, basing on actual state of my models. In RoR I will have, like 50 files, where some of them may be absolutely redundant (correct me if I'm wrong).
I would like to have a RoR app that would create migrations based on models, like in Django (so I assume models would need some information about fields).
Is there any gem/framework for RoR that would add such a feature? | Use migrations like in Django in Rails models | 1.2 | 0 | 0 | 433
32,640,584 | 2015-09-17T21:55:00.000 | 2 | 0 | 1 | 0 | python,ipython,spyder,jupyter | 33,413,421 | 1 | true | 0 | 0 | If you are running on a linux environment, try to start spyder from the command line, once you have the PATH pointing to the anaconda 'bin' folder. The spyder executable there should be upgraded. | 1 | 2 | 0 | I recently installed ipython 4.0 to use JUPYTER NOTEBOOK. I used SPYDER as IDE for development in Python, but now SPYDER doesn't work.
Why spyder does not support ipython 4.0? | SPYDER and IPython 4.0 support | 1.2 | 0 | 0 | 284 |
32,640,830 | 2015-09-17T22:16:00.000 | 0 | 0 | 1 | 0 | python,numpy,multidimensional-array,scipy,subclass | 32,641,163 | 2 | false | 0 | 0 | No. Numpy's asarray() is coded to instantiate a regular numpy array, and you can't change that without editing asarray() or changing the caller's code to call your special method instead of asarray() | 1 | 0 | 1 | Basically I have a class which subclasses ndarray and has additional information. When I call np.asarray() on my object, it returns just the numpy array and destroys my additional information.
My question is then this: Is there a way in Python to change how np.asarray() acts on my subclass of ndarray from within my subclass? I don't want to change numpy of course, and I do not want to go through every instance where np.asarray() is called to take care of this.
Thanks in advance!
Chris | In Python, can you change how a method from class 1 acts on class 2 from within class 2? | 0 | 0 | 0 | 54 |
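A small illustration of the point made in the answer above: np.asarray always coerces to a plain ndarray, while np.asanyarray passes subclasses through, so extra attributes survive. The Info subclass here is a made-up example.

import numpy as np

class Info(np.ndarray):
    """ndarray subclass carrying an extra `meta` attribute."""
    def __new__(cls, input_array, meta=None):
        obj = np.asarray(input_array).view(cls)
        obj.meta = meta
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.meta = getattr(obj, "meta", None)

a = Info([1, 2, 3], meta="calibration run 7")

plain = np.asarray(a)     # plain numpy.ndarray, the meta attribute is gone
kept = np.asanyarray(a)   # still an Info instance, meta is preserved

print(type(plain).__name__, type(kept).__name__, kept.meta)

So code that should preserve the subclass can call np.asanyarray, but what np.asarray itself returns cannot be changed from within the subclass.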
32,652,611 | 2015-09-18T12:55:00.000 | 0 | 0 | 1 | 0 | python,pdf,matplotlib | 32,664,477 | 3 | false | 0 | 0 | The PDF backend makes one page per figure. Use subplots to get multiple plots into one figure and they'll all show up together on one page of the PDF. | 1 | 1 | 0 | I'm trying to get my figures in just one pdf page, but I don't know how to do this. I found out that it's possible to save multiple figures in a pdf file with 'matplotlib.backends.backend_pdf', but it doesn't work for just one page.
Has anyone any ideas ? Convert the figures to just one figure ? | Save multiple figures in one pdf page, matplotlib | 0 | 0 | 0 | 8,551 |
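A minimal sketch of the subplots approach from the answer above: put every plot on one figure and save that single figure, so the resulting PDF has exactly one page (the data is made up).

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)

# One figure with a 2x2 grid of axes -> one page in the PDF.
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
axes[0, 0].plot(x, np.sin(x))
axes[0, 1].plot(x, np.cos(x))
axes[1, 0].plot(x, np.sin(2 * x))
axes[1, 1].plot(x, np.cos(2 * x))

fig.tight_layout()
fig.savefig("all_plots.pdf")  # a single-page PDF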
32,654,888 | 2015-09-18T14:48:00.000 | 1 | 0 | 1 | 0 | python,string,list | 32,655,059 | 6 | false | 0 | 0 | .upper() returns a new string, which you assign to test1. The string you operated on ("hello") is not modified. Indeed, it couldn't be since strings are immutable in Python.
.reverse() modifies in-place. That means the object ["hello", "world"] got modified. Unfortunately, you don't have a variable pointing to that object, so you can't see that.
This is a convention in Python. Functions that modify in-place return None, whereas functions that create a modified copy return that copy. | 2 | 1 | 0 | I have a quick question about an issue I discovered. In the Python shell I can write test1 = "hello".upper() and when I type test1 I get "HELLO", as expected. However, if I do something similar with a list, such as test2 = ["hello", "world"].reverse(), and I try to return test2, I get nothing; it is a "NoneType" with no value assigned to it. Why does this happen? Why can I make an assignment with a method acting on a string but I can't make an assignment when there is a method acting on a list? | Using methods for lists vs. using methods for strings in Python | 0.033321 | 0 | 0 | 255 |
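A short demonstration of that convention:

text = "hello"
upper = text.upper()      # str.upper returns a new string
print(upper)              # HELLO
print(text)               # hello -- the original is unchanged

words = ["hello", "world"]
result = words.reverse()  # list.reverse mutates in place and returns None
print(result)             # None
print(words)              # ['world', 'hello']

# For a reversed copy in a single expression use reversed() or slicing:
print(list(reversed(["hello", "world"])))
print(["hello", "world"][::-1])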
32,654,888 | 2015-09-18T14:48:00.000 | 2 | 0 | 1 | 0 | python,string,list | 32,655,630 | 6 | false | 0 | 0 | The reason that you can sometimes do one thing with one object and not do something different with a different object is that different objects and different methods are different. The documentation for each method says what it does. You need to look at the documentation for the individual method you are using to see what it does. There is no reason to assume that someString.upper() will work like someList.reverse(), because strings are not lists, and upper is not reverse. | 2 | 1 | 0 | I have a quick question about an issue I discovered. In the Python shell I can write test1 = "hello".upper() and when I type test1 I get "HELLO", as expected. However, if I do something similar with a list, such as test2 = ["hello", "world"].reverse(), and I try to return test2, I get nothing; it is a "NoneType" with no value assigned to it. Why does this happen? Why can I make an assignment with a method acting on a string but I can't make an assignment when there is a method acting on a list? | Using methods for lists vs. using methods for strings in Python | 0.066568 | 0 | 0 | 255 |
32,659,008 | 2015-09-18T18:45:00.000 | 1 | 0 | 0 | 0 | python,sql-server,database-design,triggers,database-cloning | 32,659,567 | 1 | true | 0 | 0 | lad2015 answered the first part. The second part can be infinitely more dangerous as it involves calling outside the Sql Server process.
In the bad old days one would use the xp_cmdshell. These days it may be more worthwhile to create an Unsafe CLR stored procedure that'll call the python script.
But it is very dangerous and I cannot stress just how much you shouldn't do it unless you really have no other choices.
I'd prefer to see a polling python script that runs 100% external to Sql that connects to a Status table populated by the trigger and performs work accordingly. | 1 | 0 | 0 | I've a read-only access to a database server dbserver1. I need to store the result set generated from my query running on dbserver1 into another server of mine dbserver2. How should I go about doing that?
Also can I setup a trigger which will automatically copy new entries that will come in to the dbserver1 to dbserver2? Source and destination are both using Microsoft SQL server.
Following on that, I need to call a python script on a database trigger event. Any ideas on how could that be accomplished? | Copy database from one server to another server + trigger python script on a database event | 1.2 | 1 | 0 | 449 |
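A rough sketch of the external polling script suggested at the end of the answer above, using pyodbc. The connection string, the Status table and its columns are assumptions made for illustration; the trigger on the source server would insert rows that this script then picks up and processes.

import time
import pyodbc  # pip install pyodbc

# Assumed connection string -- adjust the driver, server and credentials.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=dbserver1;DATABASE=SourceDb;UID=reader;PWD=secret")

def handle_new_entry(payload):
    print("new entry:", payload)  # e.g. copy the row over to dbserver2

def poll_forever(interval=10):
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    while True:
        # Hypothetical Status table populated by the trigger.
        cursor.execute("SELECT Id, Payload FROM Status WHERE Processed = 0")
        for row_id, payload in cursor.fetchall():
            handle_new_entry(payload)
            cursor.execute("UPDATE Status SET Processed = 1 WHERE Id = ?", row_id)
        conn.commit()
        time.sleep(interval)

if __name__ == "__main__":
    poll_forever()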
32,659,032 | 2015-09-18T18:47:00.000 | 1 | 0 | 1 | 0 | python,web-services,multiprocessing | 32,660,292 | 1 | false | 0 | 0 | Web2py's scheduler may be a solution for up to about 50 concurrent processes. For more processes, state-of-the-art async processing (e.g., Celery) should be necessary. | 1 | 1 | 0 | I have a long-running python script that needs to be executed by an arbitrary number of users for an arbitrary number of files. Multiprocessing and scalability are essential.
Based on a post regarding the multiprocessing module, it appears that a user reloading or resubmitting may be a problem as well as the server dropping tasks that run too long.
Is a desktop interface the best option, or are there any examples of a similar web application? | Are there any web applications with user-driven multiprocessing tasks? | 0.197375 | 0 | 0 | 48 |
32,660,088 | 2015-09-18T19:58:00.000 | 1 | 1 | 0 | 1 | php,python,apache,permission-denied | 32,660,141 | 2 | false | 0 | 0 | You need read permission to run the python script. | 1 | 2 | 0 | I have a PHP script that is supposed to execute a python script as user "apache" but is returning the error:
/transform/anaconda/bin/python: can't open file '/transform/python_code/edit_doc_with_new_demo_info.py': [Errno 13] Permission denied
Permissions for edit_doc_with_new_demo_info.py are ---xrwx--x. 1 apache PosixUsers 4077 Sep 18 12:14 edit_doc_with_new_demo_info.py. The line that is calling this python script is:
shell_exec('/transform/anaconda/bin/python /transform/python_code/edit_doc_with_new_demo_info.py ' . escapeshellarg($json_python_data) .' >> /transform/edit_subject_python.log 2>&1')
If apache is the owner of the python file and the owner has execute permission how can it be unable to open the file? | Python can't open file | 0.099668 | 0 | 0 | 4,074 |
32,660,114 | 2015-09-18T20:00:00.000 | 28 | 0 | 1 | 0 | python,windows,opencv,installation,pycharm | 32,664,411 | 3 | true | 0 | 0 | I finally figured out on how to solve this issue:
Steps to follow:
Install Python 2.7.10
Install PyCharm (if you have not done it already)
Download and install the OpenCV executable.
Add OpenCV to the system path (%OPENCV_DIR% = /path/of/opencv/directory)
Go to the C:\opencv\build\python\2.7\x86 folder and copy the cv2.pyd file.
Go to the C:\Python27\DLLs directory and paste the cv2.pyd file.
Go to the C:\Python27\Lib\site-packages directory and paste the cv2.pyd file.
Go to the PyCharm IDE and open Default Settings > Python Interpreter.
Select the Python which you installed in Step 1.
Install the packages numpy, matplotlib and pip in PyCharm.
Restart your PyCharm.
PyCharm now has OpenCV library installed and working. | 1 | 8 | 1 | I am trying to install and use OpenCV library for python development. I want to use it for PyCharm IDE. I am trying to do it without the package manager.
The environment is a windows 64 bit architecture. For Python I am using Python 2.7.10.
I have already included the OpenCV directory in the system path.
I am using python 2.7.10 interpreter for PyCharm and have installed the pip and numpy packages.
opencv version is 3.0.0
How do I enable OpenCV and make it working in PyCharm? | How to install OpenCV on Windows and enable it for PyCharm without using the package manager | 1.2 | 0 | 0 | 66,287 |
32,660,496 | 2015-09-18T20:27:00.000 | 0 | 0 | 0 | 0 | python,mysql,django | 32,660,678 | 2 | false | 1 | 0 | You could create a model UserItems for each user with a ForeignKey pointing to the user and an item ID pointing to items. The UserItems model should store the unique item IDs of the items that belong to a user. This should scale better if items can be attached to multiple users or if items can exist that aren't attached to any user yet. | 1 | 2 | 0 | Good evening,
I am working on some little website for fun and want users to be able to add items to their accounts. What I am struggling with is coming up with a proper solution how to implement this properly.
I thought about adding the User Object itself to the item's model via ForeignKey but wouldn't it be necessary to filter through all entries in the end to find the elements attached to x user? While this would work, it seems quite inefficient, especially when the database has grown to some point. What would be a better solution? | Add models to specific user (Django) | 0 | 0 | 0 | 1,112 |
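A minimal sketch of the ForeignKey layout being discussed; the field and model names are illustrative. Note that a ForeignKey column gets a database index by default, so looking up a user's items is not an unindexed scan even as the table grows.

from django.conf import settings
from django.db import models

class Item(models.Model):
    owner = models.ForeignKey(settings.AUTH_USER_MODEL,
                              on_delete=models.CASCADE,
                              related_name="items")
    name = models.CharField(max_length=100)

# Usage:
#   request.user.items.all()        -> items attached to the logged-in user
#   Item.objects.filter(owner=user) -> the same query written explicitly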
32,671,518 | 2015-09-19T17:59:00.000 | -1 | 1 | 1 | 0 | python,python-3.x | 32,671,674 | 3 | false | 0 | 0 | Modules are found in /usr/lib/python3/dist-packages in unix-like systems
and in C:\Python34\Lib on Windows | 1 | 1 | 0 | I have to analyse a Python file. For this I should analyse all modules imported in the given file (i.e. get the source code of those modules if they are written in Python).
How can I get the paths to the files of the imported Python modules?
I tried to use sys.path, but it only gives all the paths where the Python interpreter may search for modules | Getting paths to code of imported modules | -0.066568 | 0 | 0 | 55
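Beyond the default install locations, the standard library's modulefinder can report the file path of every module a given script imports, which is closer to what the question asks for (the script name is a placeholder):

from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script("some_script.py")  # the file you want to analyse

for name, module in finder.modules.items():
    # Built-in and extension modules have no Python source, so __file__ may be None.
    print(name, "->", module.__file__)

For a single module that is already importable, importlib.util.find_spec("name").origin (Python 3.4+) gives the same path information.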
32,673,359 | 2015-09-19T21:20:00.000 | 2 | 0 | 1 | 0 | python,opencv | 71,442,355 | 7 | false | 0 | 0 | It seems the latest version of opencv-python has fixed this issue; I upgraded opencv-python from 4.4.0.44 to 4.5.5.64 via pip install --upgrade opencv-python, and the error disappeared. | 2 | 35 | 0 | What causes the SystemError in this line of code
cv2.line(output, point1, point2, (0,0,255), 5)? | SystemError: new style getargs format but argument is not a tuple? | 0.057081 | 0 | 0 | 62,047 |
32,673,359 | 2015-09-19T21:20:00.000 | 0 | 0 | 1 | 0 | python,opencv | 71,422,867 | 7 | false | 0 | 0 | just write this following way
cv2.line(output, tuple(point1), tuple(point2), (0,0,255), 5) | 2 | 35 | 0 | What causes the SystemError in this line of code
cv2.line(output, point1, point2, (0,0,255), 5)? | SystemError: new style getargs format but argument is not a tuple? | 0 | 0 | 0 | 62,047 |
32,675,024 | 2015-09-20T02:07:00.000 | 1 | 0 | 1 | 0 | python,scikit-learn,python-import,anaconda | 65,357,723 | 8 | false | 0 | 0 | SOLVED:
reinstalled Python 3.7.9 (not the latest)
installed numpy 1.17.5 (not the latest)
installed scikit-learn (latest)
sklearn works now! | 4 | 24 | 1 | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | Getting PyCharm to import sklearn | 0.024995 | 0 | 0 | 67,839 |
32,675,024 | 2015-09-20T02:07:00.000 | 0 | 0 | 1 | 0 | python,scikit-learn,python-import,anaconda | 53,357,368 | 8 | false | 0 | 0 | For Mac OS:
PyCharm --> Preferences --> Project Interpreter --> Double Click on pip (a new window will open with search option) --> mention 'Scikit-learn' on the search bar --> Install Packages --> Once installed, close that new window --> OK on the existing window
and you are done. | 4 | 24 | 1 | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | Getting PyCharm to import sklearn | 0 | 0 | 0 | 67,839 |
32,675,024 | 2015-09-20T02:07:00.000 | 8 | 0 | 1 | 0 | python,scikit-learn,python-import,anaconda | 54,702,828 | 8 | false | 0 | 0 | Please note that in the package search you should look for 'scikit-learn' instead of 'sklearn' | 4 | 24 | 1 | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | Getting PyCharm to import sklearn | 1 | 0 | 0 | 67,839 |
32,675,024 | 2015-09-20T02:07:00.000 | 1 | 0 | 1 | 0 | python,scikit-learn,python-import,anaconda | 50,808,255 | 8 | false | 0 | 0 | The same error occurred for me; I fixed it by selecting File Menu -> Default Settings -> Project Interpreter -> pressing the + button, typing 'sklearn' and pressing the Install button. Installation will be done in 10 to 20 seconds.
If the issue is not resolved, please check your PyCharm interpreter path. Sometimes a machine has both Python 2.7 and Python 3.6 installed and there may be a conflict over which one is chosen. | 4 | 24 | 1 | Beginner here.
I’m trying to use sklearn in pycharm. When importing sklearn I get an error that reads “Import error: No module named sklearn”
The project interpreter in pycharm is set to 2.7.10 (/anaconda/bin/python.app), which should be the right one.
Under default preferenes, project interpreter, I see all of anacondas packages. I've double clicked and installed the packages scikit learn and sklearn. I still receive the “Import error: No module named sklearn”
Does anyone know how to solve this problem? | Getting PyCharm to import sklearn | 0.024995 | 0 | 0 | 67,839 |
32,675,202 | 2015-09-20T02:39:00.000 | 0 | 0 | 0 | 1 | python-2.7,python-3.4,scandir | 32,675,226 | 1 | true | 0 | 0 | You want scandir(), which has been added to the standard library for 3.5. It is available for 2.7 and 3.4 from the Python Package index. (You should be able to use pip or easyinstall to retrieve it.) | 1 | 1 | 0 | I have to process a large number of wide directory trees that are only a few levels tall and with the leaf (and only the leaf) directories containing thousands of files (over NFS). When I use os.walk() there seems to be a very long delay at the leaf nodes as os.walk() is generating a list of all files in the directory. Is there a solution that will give me one file at a time (as it walks the filesystem) instead of pre-generating the entire list?
I'm interested in both Python 2.7 and python 3.4 solutions | efficient directory tree walking in Python | 1.2 | 0 | 0 | 242 |
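A rough sketch of a lazy walker built on scandir that yields one file at a time instead of building whole directory listings; on 2.7 and 3.4 the scandir backport package provides the same API that became os.scandir in 3.5 (the root path is a placeholder):

try:
    from os import scandir        # Python 3.5+
except ImportError:
    from scandir import scandir   # pip install scandir  (2.7 / 3.4 backport)

def iter_files(root):
    """Yield file paths one at a time, without pre-building full listings."""
    stack = [root]
    while stack:
        path = stack.pop()
        for entry in scandir(path):
            if entry.is_dir(follow_symlinks=False):
                stack.append(entry.path)
            else:
                yield entry.path

for filename in iter_files("/mnt/nfs/data"):
    print(filename)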
32,675,742 | 2015-09-20T04:14:00.000 | 0 | 0 | 1 | 0 | python,ipython-notebook | 32,675,775 | 2 | false | 0 | 0 | %alias cat cat, this makes %cat just like typing !cat. Really only useful if you have standard flags or arguments or more complex commands, e.g. one of the standard aliases: list files with execute bit set alias lx ls -F -l -G %l | grep ^-..x | 1 | 0 | 0 | I just started using IPython and learning from the tutorials. I have created a file but when I use the %cat command I get ERROR: Line magic function %cat not found. When I type %alias it does not list 'cat' as one of the commands. How can I add it to the list? | How do I add %cat to IPython? | 0 | 0 | 0 | 2,212 |
32,678,690 | 2015-09-20T11:09:00.000 | 0 | 0 | 1 | 0 | python,installation,pip | 32,678,818 | 3 | false | 0 | 0 | You can go to your Python 3.4 scripts directory and run its pip from:
../python3.4/scripts | 1 | 0 | 0 | How to install pip for python3.4 when my pi have python3.2 and python3.4
when I used sudo install python3-pip
it's only for python3.2
but I want install pip for python3.4 | How to install pip for python3.4 when my pi have python3.2 and python3.4 | 0 | 0 | 0 | 776 |
32,680,081 | 2015-09-20T13:45:00.000 | 2 | 0 | 1 | 0 | python,pip | 32,680,295 | 2 | false | 0 | 0 | A couple more points:
Check to see if you're installing the library into the virtualenv that you want to use.
There are some libraries whose package names are different from the library's name. You could take a look at their documentation online (google with keyword python <library> would usually bring up the information) to see if you're importing the package correctly. | 1 | 38 | 0 | I have successfully installed a library with pip install <library-name>. But when I try to import it, python raises ImportError: No module named <library-name>. Why do I get this error and how can I use the installed library? | ImportError after successful pip installation | 0.197375 | 0 | 0 | 67,829 |
32,680,534 | 2015-09-20T14:35:00.000 | 2 | 0 | 0 | 0 | javascript,python,browser,beautifulsoup,python-requests | 32,680,624 | 1 | false | 1 | 0 | In most cases, it is enough to analyze the "network" tab of the developer tools and see the requests that are fired when you hit the button you mentioned.
Once you understand those requests, you will be able to implement your scraper to send similar requests and grab the relevant data. | 1 | 0 | 0 | I'm facing a new problem.
I'm writing a scraper for a website, usually for this kind of tasks I use selenium, but in this case I cannot use anything that simulate a web-browser.
Researching on StackOverflow, I read that the best solution is to understand what the JavaScript did and rebuild the request over HTTP.
Yeah, I understand it well in theory, but I don't know how to start, as I don't know the technologies involved well.
In my specific case, some HTML is added to the page when the button is clicked. With developer tools I set a breakpoint on the 'click' event, but from here, I'm literally lost.
Can anyone link some resources and examples I can study? | Python - Rebuild Javascript generated code using Requests Module | 0.379949 | 0 | 1 | 84
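A rough sketch of what rebuilding the request usually looks like once the Network tab reveals the endpoint the button calls; the URL, parameters and response shape below are placeholders for whatever the developer tools actually show:

import requests

session = requests.Session()
session.headers.update({"User-Agent": "Mozilla/5.0"})

# Load the page first so the session picks up any cookies the site sets.
session.get("https://example.com/listing")

# Replay the request the button fires, as observed in the Network tab.
resp = session.get(
    "https://example.com/api/items",       # placeholder endpoint
    params={"page": 2, "sort": "newest"},  # placeholder query string
    headers={"X-Requested-With": "XMLHttpRequest"},
)

data = resp.json()  # many such endpoints return JSON; others return HTML fragments
for item in data.get("items", []):
    print(item)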
32,680,603 | 2015-09-20T14:42:00.000 | 3 | 0 | 1 | 0 | python,pip,version,python-module,requirements.txt | 46,266,091 | 1 | false | 0 | 0 | Based on my experience as developer, packager (package maintainer for distributions) and software maintainer I came to the following interpretation / recommendations:
install_requires: The dependencies listed in install_requires are checked during runtime (!) by pkg_resources. They are hard dependencies. They can (should?) contain a required minimum version number, but not an exact version unless very good reasons are given. More supported versions are generally more useful, maximum version numbers are usually a nightmare.
extras_require lists optional requirements (recommendations), which are not needed for the core functionality but enable some extras or optional enhancements. If the software does not work properly without a dependency, it should go in install_requires.
requirements.txt Some maintainers set it the same as install_requires, some others don't use it at all. It can be used to recommend specific versions of requirements, which are best tested. This is of course not useful at all for packaging, but for setups in virtualenvs and similar.
Packagers usually do not use the information from requirements.txt, but from install_requires and extras_require. | 1 | 3 | 0 | I'm looking for best practices, do's and don'ts regarding version specifications in requirements files for pip in python packages.
Assume a python package which depends on some other modules. A minimum version is required for most of them. At least it is known to the maintainers that the code works with at least, e.g., six 1.7
Now, it is possible to define the requirement in different ways:
six>=1.7.0 The software has been tested with this version and it is assumed that it will also work with future versions
six==1.7.0 We require the exact version, the package has been tested with. The software has not been tested with all future versions of the module, thus we can't guarantee it will work for those.
six==1.9.0 We just test it with the most recent version and require it.
I do have an inhibition to require an exact version, as it may break other packages requirements and seems bad practice for me. On the other hand, the package has not been tested with all versions of six above 1.7.0.
Are there any guidelines regarding package version requirements and the usage of == against >=? | Python package requirements: Usage of version specifiers == and >= | 0.53705 | 0 | 0 | 942 |
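For concreteness, the split described in the answer above might look like this in a package's setup.py; the package name and the exact pins are examples only, and an application's requirements.txt is where hard pins such as six==1.9.0 belong, since it describes one known-good environment rather than a compatibility range.

from setuptools import setup, find_packages

setup(
    name="mypackage",
    version="0.1.0",
    packages=find_packages(),
    # Hard runtime dependencies: minimum versions, no exact pins.
    install_requires=[
        "six>=1.7.0",
        "requests>=2.4",
    ],
    # Optional extras, installed via `pip install mypackage[docs]`.
    extras_require={
        "docs": ["sphinx>=1.2"],
    },
)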
32,682,020 | 2015-09-20T17:04:00.000 | 0 | 0 | 0 | 0 | python,facebook,api,ads | 32,905,674 | 1 | true | 1 | 0 | You can get the post that is used for a creative by fetching the object_story_id field on an Ad Creative.
In the hierarchy of Facebook ads products, this is located here: Ad Account -> Ad Campaign -> Ad Set -> Ad -> Ad Creative.
There is no way to reversely look up which ads objects are using a particular post for their creative. | 1 | 0 | 0 | I use Python Facebook Ads API.
I got AD_SET_ID.
How Can I find boosted post ID (looks like 204807582871020_615180368500404) which is related to AD_SET_ID | How to find Post_ID which related to Facebook ADS API | 1.2 | 0 | 0 | 502 |
32,682,065 | 2015-09-20T17:10:00.000 | 1 | 1 | 0 | 0 | python,nginx,apache2,server,uwsgi | 32,682,202 | 1 | true | 1 | 0 | Andrew,
I believe that you can move some pieces of your deployment topology.
My suggestion is use nginx for delivering HTTP content, and expose your application using some web framework, i.e. tornadoweb (my preference, considering async core, and best documented if compared to twisted, even twisted being a really great framework)
You can communicate between nginx and tornado by proxy. It is simple to configure.
You can replicate your service instance to distribute your calculation application across the same machine and other hosts. It can be easily configured with nginx upstreams.
If you need more performance, you can break your application into small modules and integrate them using async messaging. You can choose zeromq or rabbitmq, among other solutions.
Then, you can have different topologies, gradually applied during the evolution of your application.
1st Topology:
nginx -> tornadoweb
2nd Topology:
nginx with loadbalance (upstreams) -> tornadoweb replicated on [1..n] instances
3rd Topology:
[2nd topology] -> your app integrated by messaging (zeromq, amqp(rabbitmq), ...)
My favorite is 3rd, for begining. But, you should start, for this moment, by 1th and 2nd
There are a lot of options. But, these thre may be sufficient for a simple organization of your app. | 1 | 1 | 0 | I have a site, that performs some heavy calculations, using library for symbolic math.
Currently average calculation time is 5 seconds.
I know that I'm asking too broad a question, but nevertheless, what is the optimal configuration for this type of site? What server is best for this?
Currently, I'm using Apache with mod_wsgi, but I don't know how to correctly configure it.
On average site is receiving 40 requests per second.
How many processes, threads, MaxClients etc. should I set?
Maybe, it is better to use nginx/uwsgi/gunicorn (I'm using python as programming language)?
Anyway, any info is highly appreciated. | Best server configuration for site with heavy calculations | 1.2 | 0 | 0 | 229 |
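A bare-bones sketch of the first topology from the answer (nginx proxying to a single Tornado process); the handler body is a stand-in for the real symbolic-math calculation, nginx would proxy_pass to port 8888, and running several copies on different ports behind an upstream block gives the second topology.

import tornado.ioloop
import tornado.web

class CalcHandler(tornado.web.RequestHandler):
    def get(self):
        # Placeholder for the real heavy calculation.
        self.write({"result": 42})

def make_app():
    return tornado.web.Application([
        (r"/calc", CalcHandler),
    ])

if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()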
32,682,293 | 2015-09-20T17:32:00.000 | 9 | 0 | 0 | 0 | python,django,git,migration | 38,489,436 | 3 | false | 1 | 0 | I don't have a good solution to this, but I feel the pain.
A post-checkout hook will be too late. If you are on branch A and you check out branch B, and B has fewer migrations than A, the rollback information is only in A and needs to be run before checkout.
I hit this problem when jumping between several commits trying to locate the origin of a bug. Our database (even in development trim) is huge, so dropping and recreating isn't practical.
I'm imagining a wrapper for git-checkout that:
Notes the newest migration for each of your INSTALLED_APPS
Looks in the requested branch and notes the newest migrations there
For each app where the migrations in #1 are farther ahead than in #2, migrate back to the highest migration in #2
Check out the new branch
For each app where migrations in #2 were ahead of #1, migrate forward
A simple matter of programming! | 1 | 62 | 0 | I'm curious how other django developers manage multiple code branches (in git for instance) with migrations.
My problem is as follows:
- we have multiple feature branches in git, some of them with django migrations (some of them altering fields, or removing them altogether)
- when I switch branches (with git checkout some_other_branch) the database does not reflect always the new code, so I run into "random" errors, where a db table column does not exist anymore, etc...
Right now, I simply drop the db and recreate it, but it means I have to recreate a bunch of dummy data to restart work. I can use fixtures, but it requires keeping track of what data goes where, it's a bit of a hassle.
Is there a good/clean way of dealing with this use-case? I'm thinking a post-checkout git hook script could run the necessary migrations, but I don't even know if migration rollbacks are at all possible. | django migrations - workflow with multiple dev branches | 1 | 0 | 0 | 11,656 |
32,682,336 | 2015-09-20T17:37:00.000 | 3 | 0 | 0 | 0 | python,xlsx,openpyxl | 32,684,112 | 1 | false | 0 | 0 | You have to save the files with the extension ".xlsm" rather than ".xlsx". The .xlsx format exists specifically to provide the user with assurance that there is no VBA code within the file. This is an Excel standard and not a problem with openpyxl. With that said, I haven't worked with openpyxl, so I'm not sure what you need to do to be sure your files are properly converted to .xlsm.
Edit: Sorry, misread your question first time around. Easiest step would be to set keep_vba=False. That might resolve your issue right there, since you're telling openpyxl to look for VBA code that can't possibly exist in an xlsx file. Hard to say more than that until you post the relevant section of your code. | 1 | 1 | 0 | In the environment, we have an excel file, which includes rawdata in one sheet and pivot table and charts in another sheet.
I need to append rows every day to raw data automatically using a python job.
I am not sure, but there may be some VB Script running on the front end which will refresh the pivot tables.
I used openpyxl and by following its online documentation, I was able to append rows and save the workbook. I used keep_vba=true while loading the workbook to keep the VBA modules inside to enable pivoting. But after saving the workbook, the xlsx is not being opened anymore using MS office and saying the format or the extension is not valid. I can see the data using python but with office, its not working anymore. If I don't use keep_vba=true, then pivoting is not working, only the previous values are present (ofcourse as I understood, as VBA script is needed for pivoting).
Could you explain me what's happening? I am new to python and don't know its concepts much.
How can I fix this in openpyxl or is there any better alternative other than openpyxl. Data connections in MS office is not an option for me.
As I understood, xlsx may need special modules to save the VB script to save in the same way as it may be saved using MS office. If it is, then what is the purpose of keep_vba=true ?
I would be grateful if you could explain in more detail. I would love to know.
As I have very short time to complete this task, I am looking for a quick answer here, instead of going through all the concepts.
Thankyou! | xlsx file extension not valid after saving with openpyxl and keep_vba=true. Which is the best way? | 0.53705 | 1 | 0 | 3,002 |
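A minimal sketch of the two combinations that stay valid, per the answer above: either load a plain .xlsx without keep_vba, or keep the VBA project and save with an .xlsm extension. The file and sheet names are placeholders.

from openpyxl import load_workbook

# Case 1: a plain .xlsx with no macros -- don't pass keep_vba at all.
wb = load_workbook("report.xlsx")
wb["rawdata"].append(["2015-09-21", 42, "ok"])
wb.save("report.xlsx")

# Case 2: the workbook really contains VBA -- keep it and save as .xlsm.
wb = load_workbook("report.xlsm", keep_vba=True)
wb["rawdata"].append(["2015-09-21", 17, "ok"])
wb.save("report.xlsm")  # saving this as .xlsx is what makes Excel reject the file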
32,682,674 | 2015-09-20T18:12:00.000 | 0 | 0 | 0 | 0 | python,sqlite | 32,700,885 | 1 | false | 0 | 0 | Indeed, inspired by Colonel Thirty Two's comment above, I just realized that I need to wrap all my operations into one transaction. This was trivial to implement and improved overall efficiency drastically. Thanks once again! | 1 | 0 | 0 | I'm using Python 2.7.5 in Spyder along with Sqlite3 3.6.21. I have noticed the execute method to be pretty slow, pretty much regardless of the size of the database I'm creating. After doing some research, no solution really works for me:
Python 3 is not supported by Spyder yet
updating the Sqlite3 version does not work (replacing the dll file causes problems)
Is there a way around this? If any more details are needed, I'm glad to elaborate further. | python sqlite3 execute method slow | 0 | 1 | 0 | 157 |
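A small sketch of the single-transaction fix described in the answer: batch the statements inside one transaction (here via the connection's context manager and executemany) instead of committing after every INSERT.

import sqlite3

conn = sqlite3.connect("data.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, value REAL)")

rows = [("2015-09-20 18:%02d" % (i % 60), i * 0.5) for i in range(10000)]

# The with-block wraps everything in one transaction and commits once at the end,
# which is far faster than committing after each INSERT.
with conn:
    conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)

conn.close()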
32,683,912 | 2015-09-20T20:22:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter,ttk | 32,696,645 | 1 | true | 0 | 1 | The combobox widget has a width attribute which you can use to control the width of the combobox. However, the dropdown menu will always be the same width as the control, so you won't be able to see all of the characters in long items. | 1 | 0 | 0 | I have a tkinter-based gui that has a sidepanel with comboboxes. They are all contained within a frame for easy packing.
Thing is if I put a "long_test_level_for_instance" the width of the combobox grows to accommodate that(whether such a string is selected or not)
What I would like is a fixed width combobox/frame. Is this doable? it was a pain todo it in GTK but I found a way, it is automatic in qt and I am wondering now for tk
--edit--
self.framePLOTSEL.pack_propagate(False) in conjunction with width=...
This however clips the content of the combobox when it is expanded (its expected to be clipped when collapsed) | TK combo boxes & restricting their size | 1.2 | 0 | 0 | 61 |
32,683,920 | 2015-09-20T20:23:00.000 | 0 | 0 | 0 | 0 | python,vim | 32,813,017 | 1 | true | 0 | 0 | I was wrong, <C-R> does allow you to edit the buffer. Ended up using a mapping like inoremap ( <C-R>=OpenChar('(')<cr> where OpenChar calls the python function to do the work then returns "" to not insert anything after. | 1 | 2 | 0 | I'm attempting to write a vim plugin using the vim python module that handles basic bracket completion (I know this exists, it's more of a learning exercise). I've run in to an issue where I'd like to remap '(' to a function in insert mode. C-o is an option, but when it leaves insert mode, it moves the cursor to the left which trashes the cursor position. As far as I can think, it's impossible to tell if the '(' was entered at pos 0 or pos 1, because either will end up with pos of 0 while in the function call.
I've tried implementing it with expr or C-r, but the issue is I'd like to control cursor position after the insert of the bracket (i.e. move to the right of the bracket) and edit the buffer (both of which aren't allowed in when using expr or C-r).
So, is there a way to either call a function in insert mode that allows editing the buffer/changing cursor position? If not, is there a way to capture the cursor position prior to leaving insert mode? | Calling function while in insert mode that allows editing buffer | 1.2 | 0 | 0 | 62 |
32,684,012 | 2015-09-20T20:32:00.000 | 2 | 0 | 1 | 0 | python,pdf-parsing | 32,684,072 | 1 | true | 0 | 0 | pdfminer will get your text. pdfrw (disclaimer: I am the author of pdfrw) has examples that will find images and dump them to separate pages, and also examples that will split PDFs into separate pages, so you could easily extract all the images to separate PDFs. If you run inkscape in a headless mode (e.g. from the subprocess module), it can read in the PDF and output a different format. | 1 | 0 | 0 | I want to parse some pdf files that contains text and may or may not contain images. I want to extract the text portion as string for further processing and save the image as jpeg/png or any other image format. what should be the best module to work with? | Python: parse pdf with images | 1.2 | 0 | 0 | 1,127 |
32,685,466 | 2015-09-20T23:45:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,logging,twisted | 33,092,530 | 1 | true | 0 | 0 | Log files are pretty much inherently blocking things. Eventually Twisted may integrate a non-blocking log consumer with an intelligent buffering policy, but that day is not here yet. Good luck! | 1 | 2 | 0 | I'm using Twisted 14.0.2 and I was wondering what's the correct way of doing a rotating log file without blocking I/O? | What's the correct way to do a rotating log with python twisted? | 1.2 | 0 | 0 | 106 |
32,687,531 | 2015-09-21T04:46:00.000 | 1 | 0 | 1 | 0 | python,loops,memory-management | 32,687,532 | 1 | true | 0 | 0 | The most basic approach is that Python has a built in trace feature which prints every line as it runs python -mtrace --trace script.py, which will effectively shows you which lines are creating the unwanted memory usage. To fix the loop that is causing this a time.sleep(0.1) does the trick. | 1 | 0 | 0 | How can I trace heavy memory usage from a python application that is eating up processing power? | How to trace a loop causing heavy memory usage in Python? | 1.2 | 0 | 0 | 229 |
32,690,561 | 2015-09-21T08:27:00.000 | 0 | 0 | 0 | 0 | python,excel,shell,vbscript | 32,691,381 | 1 | false | 0 | 0 | If you have access to a Linux environment (which you might since you mention shell script as one of your options) then just use sed in a terminal or Putty:
sed -i .bak 's/,/;/g' yourfile.excel
Sed streams the text without loading the entire file at once.
-i will make changes to your original file but providing .bak will create a copy named yourfile.excel.bak first | 1 | 1 | 0 | I have a large Excel file (450mb+). I need to replace (,) -> (; or .) for one of my fastload scripts to work. I am not able to open the file at all. Any script would actually involve opening the file, performing operation, saving and closing the file, in that order.
Will a VB script like that work here for the 450mb+ file, wherein the file is not opening only.
Is there any VB script , Shell script, Python, Java etc I can write actually to perform the replacement(operation) without opening the Excel file?
Or alternatively, is there any way of opening an Excel file that big
and performing that operation. | Replace characters without opening Excel file | 0 | 1 | 0 | 273 |
32,697,663 | 2015-09-21T14:22:00.000 | 0 | 0 | 0 | 0 | python,opencv,face-detection | 32,709,770 | 1 | false | 0 | 0 | Sort the detected faces by size and keep the biggest one only? | 1 | 0 | 1 | I am working on with the Face detection using cascade Classifier in opencv python. Its working fine. But i want to develop my code to detect only one face and also the largest face only to detect. | Face detection for two faces-Cascade opencv python | 0 | 0 | 0 | 216 |
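A minimal sketch of the keep-the-biggest-detection idea from that answer; the cascade file is the Haar cascade that ships with OpenCV and may live at a different path on your system, and the image names are placeholders.

import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

img = cv2.imread("people.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # Keep only the detection with the largest area (w * h).
    x, y, w, h = max(faces, key=lambda rect: rect[2] * rect[3])
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("largest_face.jpg", img)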
32,698,320 | 2015-09-21T14:51:00.000 | 8 | 1 | 0 | 1 | python,bash | 32,698,394 | 2 | false | 0 | 0 | There is no meaningful way to do this.
UNIX process architecture does not work this way. You cannot control the execution of a script by its parent process.
Instead we should discuss why you want to do something like this, you are probably thinking doing it in a wrong way and what good options there is to address the actual underlying problem. | 1 | 3 | 0 | I am looking for a way to limit how a python file to be called. Basically I only want it to be executable when I call it from a bash script but if ran directly either from a terminal or any other way I do not want it to be able to run. I am not sure if there is a way to do this or not but I figured I would give it a shot. | Limit a python file to only be run by a bash script | 1 | 0 | 0 | 54 |
32,702,029 | 2015-09-21T18:17:00.000 | 2 | 1 | 1 | 0 | python,c++,security,encryption,reverse-engineering | 32,702,204 | 1 | false | 0 | 0 | If your application contains the private key inside of it, then your data will never truly be safe from a motivated hacker (as they can step through the program to find it)...
Or they could run your app in a debugger, pause it after the files have been decoded in memory and then pull the data from memory. | 1 | 0 | 0 | I am in the process of evaluating whether Python is a suitable implementation choice for my program given the security requirements.
The input to my program is an set of encrypted (RSA) text files that describe some I/P that I want to keep secure. The encryption / decryption library and the private key are all accessed via SWIG wrappers to a C++ library. I envision that the Python code will call the library to decrypt the incoming source files.
Once decrypted, I will transform the I/P in some fashion and then write it out encrypted, once again using the SWIG wrapped C++ library for this function.
My program and the I/P will be distributed to customers, but the customers should not be able to examine the I/P. Only tools designated by the I/P author that have the private key should.
Can someone examine the data in its decrypted state as it flows through my program at run-time? Is there a way to protect my data in Python? Is a C++ implementation more secure than a Python one? | Securing Data In A Python Program At Runtime | 0.379949 | 0 | 0 | 89 |
32,703,401 | 2015-09-21T19:50:00.000 | 2 | 0 | 0 | 0 | django,python-2.7,django-1.8,manage.py | 44,451,289 | 2 | true | 1 | 0 | It seems pretty clear that Django is unable to find your database at specified location, reasons can be,
You have created the django project using "sudo" or with another linux user than your current user; that's why your current user might not have permissions to access that database. You can check permissions of the files by typing, in your project's root directory,
ls -la
You have configured wrong path for database file in your settings.py | 1 | 1 | 0 | I'm trying to create a project using Django 1.8.4 and Python 2.7.10, but I can't execute the command python manage.py runserver. I can create the project and apps, but can't run the server.
Please somebody help me, I'm new with Python/Django and I couldn't advance more.
The cmd show the next error when the command is executed.
C:\Users\Efren\SkyDrive\UniCosta\VIII\Ingeniería de Software
II\Django\PrimerProyecto>python manage.py runserver Performing system
checks...
System check identified no issues (0 silenced). Unhandled exception in
thread started by Traceback (most
recent call last):
File "C:\Python27\lib\site-packages\django\utils\autoreload.py",
line 225, in wrapper
fn(*args, **kwargs)
File
"C:\Python27\lib\site-packages\django\core\management\commands\runserver.py",
line 112, in inner_run
self.check_migrations()
File
"C:\Python27\lib\site-packages\django\core\management\commands\runserver.py",
line 164, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File
"C:\Python27\lib\site-packages\django\db\migrations\executor.py", line
19, in init
self.loader = MigrationLoader(self.connection)
File "C:\Python27\lib\site-packages\django\db\migrations\loader.py",
line 47, in init
self.build_graph()
File "C:\Python27\lib\site-packages\django\db\migrations\loader.py",
line 182, in build_graph
self.applied_migrations = recorder.applied_migrations()
File
"C:\Python27\lib\site-packages\django\db\migrations\recorder.py", line
59, in applied_migrations
self.ensure_schema()
File
"C:\Python27\lib\site-packages\django\db\migrations\recorder.py", line
49, in ensure_schema
if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
File
"C:\Python27\lib\site-packages\django\db\backends\base\base.py", line
162, in cursor
cursor = self.make_debug_cursor(self._cursor())
File
"C:\Python27\lib\site-packages\django\db\backends\base\base.py", line
135, in _cursor
self.ensure_connection()
File
"C:\Python27\lib\site-packages\django\db\backends\base\base.py", line
130, in ensure_connection
self.connect()
File "C:\Python27\lib\site-packages\django\db\utils.py", line 97, in
exit
six.reraise(dj_exc_type, dj_exc_value, traceback)
File
"C:\Python27\lib\site-packages\django\db\backends\base\base.py", line
130, in ensure_connection
self.connect()
File
"C:\Python27\lib\site-packages\django\db\backends\base\base.py", line
119, in connect
self.connection = self.get_new_connection(conn_params)
File
"C:\Python27\lib\site-packages\django\db\backends\sqlite3\base.py",
line 204, in get_new_connection
conn = Database.connect(**conn_params)
django.db.utils.OperationalError: unable to open database file | Python/Django run development server | 1.2 | 0 | 0 | 844 |
32,705,626 | 2015-09-21T22:39:00.000 | 2 | 0 | 1 | 1 | python,sleep,wakeup | 34,420,302 | 2 | true | 0 | 0 | I was unable to accomplish this using just python. However in the Windows SDK they provide a tool called pwrtest that will allow you to do timed sleep cycles. I am able to call this with python and then my script continues when pwrtest wakes the PC up from sleep. | 1 | 2 | 0 | I have a script that will put the system to sleep in the middle of it. Is there any way to make that script wake the system up and then continue running?
I have read many round-about ways of doing so by Wake on LAN or using Task Scheduler. I am looking for something that would wake it up after a set period of time or after a specific piece of my script is finished. I will need to this to work for Windows 7, 8.1, and 10.
Anyone know of a way to wake from sleep while still running a script? | Wake Windows PC from sleep in Python 2.7 | 1.2 | 0 | 0 | 2,597 |
32,706,147 | 2015-09-21T23:36:00.000 | 0 | 1 | 1 | 0 | python,escaping,cgi | 32,706,237 | 3 | false | 0 | 0 | If you know your input is already escaped, unescape it first. Then later escape it just before where it needs to be. | 1 | 0 | 0 | In one of my projects I'm using cgi.escape() to escape a set of titles that I get from a resource. These titles could be from Youtube or anywhere else, and may need to be escaped.
The issue I'm having is that if a title is already escaped from Youtube and I pass it into cgi.escape(), I end up getting double-escaped titles, which is messing up later parts of my project.
Is there a library that will escape strings but check if a piece is already escaped, and ignore it? | Escaping characters in Python, but ignoring already escaped characters | 0 | 0 | 0 | 492 |
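A small sketch of the unescape-then-escape approach from the answer, using xml.sax.saxutils so titles that arrive already escaped are not escaped twice:

from xml.sax.saxutils import escape, unescape

def escape_once(title):
    """Normalise first, then escape, so '&amp;' does not become '&amp;amp;'."""
    return escape(unescape(title))

print(escape_once("Tom & Jerry"))      # Tom &amp; Jerry
print(escape_once("Tom &amp; Jerry"))  # Tom &amp; Jerry (not double-escaped)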
32,708,630 | 2015-09-22T04:53:00.000 | 0 | 1 | 1 | 0 | python,global | 32,709,121 | 2 | false | 1 | 0 | Delegate the opening and management of the serial port to a separate daemon, and use a UNIX domain socket to transfer the file descriptor for the serial port to the client programs. | 1 | 3 | 0 | I have 5 different games written in python that run on a raspberry pi. Each game needs to pass data in and out to a controller using a serial connection. The games get called by some other code (written in nodeJS) that lets the user select any of the games.
I'm thinking I don't want to open and close a serial port every time I start and finish a game. Is there any way to make a serial object instance "global", open it once, and then access it from multiple game modules, all of which can open and close at will?
I see that if I make a module which assigns a Serial object to a variable (using PySerial) I can access that variable from any module that goes on to import this first module, but I can see using the id() function that they are actually different objects - different instances - when they are imported by the various games.
Any ideas about how to do this? | Python - global Serial obj instance accessible from multiple modules | 0 | 0 | 0 | 328 |
32,710,348 | 2015-09-22T07:00:00.000 | 1 | 0 | 0 | 0 | javascript,python,html,cgi | 32,710,858 | 2 | false | 1 | 0 | Well if you don't know what CGI is and find that what you ask for is "far more complicated than it should be", you first have to learn the HTTP protocol, obviously, and that's way to broad for a SO answer.
Basically what you want requires:
an html document
some javascript code, either linked from the document or embedded into it
a web server to serve the document (and the javascript)
a web server (can of course be the same as the previous one) that knows how to invoke your python script and return the result as an HTTP response (you'll probably want a json content type) - this can be done with plain CGI or with a wsgi or fcgi connector, in your case CGI might well be enough.
Then in the browser your javascript code will have to issue a GET request (ajax) every x seconds to the web server hosting the Python script and update the DOM accordingly.
This is all ordinary web programming, and as I said, a basic understanding of the HTTP protocol is the starting point. | 1 | 0 | 0 | I'm trying to accomplish the following:
Pull information from a module through Python. [Accomplished]
Constantly pull information from Python for use in HTML.
Avoid writing the entire HTML/CSS/JS document in print statements.
I've seen the term CGI thrown around, but don't really understand how it works. Simply put, I want to run the Python script which returns an integer value. I then would like to take that integer value into JavaScript so that it may update the designated tag. It should be able to execute the Python script every two seconds, receive the output, then apply it to the page. I do not want to write out the entire HTML document in Python by doing one line at a time, as I've seen some people doing on sites I've found.
It seems like doing something like this is far more complicated than it should be. The page should run, call the script for its output, then give me the output to use. | Python CGI With Data Return | 0.099668 | 0 | 0 | 490 |
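A bare-bones version of the flow described in that answer: a CGI script that prints a JSON payload, which the page's JavaScript can poll every two seconds with an AJAX GET and write into the target element. The random value stands in for whatever integer the real module produces.

#!/usr/bin/env python
# poll.py -- placed in the web server's cgi-bin directory
import json
import random

value = random.randint(0, 100)  # stand-in for the real module's output

print("Content-Type: application/json")
print("")
print(json.dumps({"value": value}))

On the browser side, a setInterval callback requests this script every two seconds, parses the JSON and writes the value into the designated tag, so the Python side never has to print any HTML.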
32,710,832 | 2015-09-22T07:27:00.000 | 13 | 0 | 1 | 0 | python,indentation,python-idle | 32,711,192 | 2 | true | 0 | 0 | It is Ctrl + [ in IDLE. You can change it to your favorite Shift + Tab in Options -> Configure IDLE - Keys. You need to restart the shell after that. | 1 | 10 | 0 | I am using Python 2.7.10 and am having trouble trying to unindent a block of code. Surely there is some sort of shortcut, instead of having to backspace every individual line?
From the Python Shell, I opened a new file (.py) and that's where I am writing the code.
Shift + Tab does not work. | How do I unindent blocks of code in Python? | 1.2 | 0 | 0 | 45,053 |
32,710,863 | 2015-09-22T07:28:00.000 | 1 | 0 | 1 | 0 | python,windows,pip,easy-install | 32,711,839 | 3 | false | 0 | 0 | That was simple.
Figured out that pip is installed automatically with python 2.10 and upgraded it to solve the problem! | 1 | 2 | 0 | I have a strange situation where I have a normal user account on windows where I have access to the internet and then an admin account without internet access.
Given this, how can I split the installation of easy_install and pip into two steps and get it installed on my machine? | How to download and install pip for python in two steps? | 0.066568 | 0 | 0 | 1,823 |
32,711,302 | 2015-09-22T07:51:00.000 | 4 | 0 | 1 | 0 | python,markdown,psql | 33,656,191 | 4 | true | 0 | 0 | I've faced similar problem when I'm using IPython notebook (Jupyter) for H2O. I installed Anaconda Ver 3.X and when i did with Anaconda 2.7.9 it worked without the JSON error. It's because IPython (Jupyter) doesn't yet support for Version 3.X of Anaconda (Python) | 3 | 8 | 0 | I have recently started working with the ipython notebook. Have created several test scripts for the same.
On opening one of the files (.ipynb) it gives me an error:
" Unreadable Notebook: /home/dev/Feedbacks_exploration.ipynb NotJSONError("Notebook does not appear to be JSON: u''...",) "
This file included fetching data from psql, plotting of a line graph, and a block of markdown.
Can anyone please help guide me on how to open this file? It has some of the important functions that could be used.
Thanks!!! | ipython: Notebook does not appear to be JSON | 1.2 | 0 | 0 | 19,778 |
32,711,302 | 2015-09-22T07:51:00.000 | 0 | 0 | 1 | 0 | python,markdown,psql | 61,640,393 | 4 | false | 0 | 0 | I got this error once when I tried to use nbformat to read in several notebooks that I was currently running (including the one from which I ran the nbformat commands). The error went away the next time I tried loading the notebooks (I might have saved in between) and hasn't occurred again since. | 3 | 8 | 0 | I have recently started working with the ipython notebook. Have created several test scripts for the same.
On opening one of the files (.ipynb) it gives me an error:
" Unreadable Notebook: /home/dev/Feedbacks_exploration.ipynb NotJSONError("Notebook does not appear to be JSON: u''...",) "
This file included fetching data from psql, plotting of a line graph, and a block of markdown.
Can anyone please help guide me on how to open this file? It has some of the important functions that could be used.
Thanks!!! | ipython: Notebook does not appear to be JSON | 0 | 0 | 0 | 19,778 |
32,711,302 | 2015-09-22T07:51:00.000 | 7 | 0 | 1 | 0 | python,markdown,psql | 43,266,472 | 4 | false | 0 | 0 | The problem for me was that I had an unresolved merge conflict in my ipython notebook. So I opened the file and deleted one version (HEAD>>>>> and what came after it up until ======= and then the line containing <<<<<<). In short, resolve the merge conflict and you'll be good. | 3 | 8 | 0 | I have recently started working with the ipython notebook. Have created several test scripts for the same.
On opening one of the files (.ipynb) it gives me an error:
" Unreadable Notebook: /home/dev/Feedbacks_exploration.ipynb NotJSONError("Notebook does not appear to be JSON: u''...",) "
This file included fetching data from psql, plotting of a line graph, and a block of markdown.
Can anyone please help guide me on how to open this file? It has some of the important functions that could be used.
Thanks!!! | ipython: Notebook does not appear to be JSON | 1 | 0 | 0 | 19,778 |
32,712,557 | 2015-09-22T08:53:00.000 | 0 | 0 | 1 | 0 | python,pyenchant | 58,909,625 | 2 | false | 0 | 0 | You can use hunspell for italian it will replace it, and myspell for spanish
sudo apt install hunspell-it   # Italian
sudo apt install myspell-es    # Spanish
Also, I want to do it for languages Italian and Spanish. I looked into available dictionaries in enchant using enchant.list_languages() and I got only ['de_DE', 'en_AU', 'en_GB', 'en_US', 'fr_FR'].
I am looking for how to do spell check for Italian and Spanish using enchant package or any other package/techniques.
Thanks. | Pyenchant - Italian and Spanish Languages | 0 | 0 | 0 | 4,538 |
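Once the system dictionaries are installed as above, enchant should pick them up and the usual API works; the language tags below depend on which dictionary packages you actually installed.

import enchant

print(enchant.list_languages())  # should now also list 'it_IT' and 'es_ES'

it = enchant.Dict("it_IT")
es = enchant.Dict("es_ES")

print(it.check("ciao"))     # True
print(es.check("hola"))     # True
print(it.suggest("citta"))  # suggestions such as 'città'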
32,714,470 | 2015-09-22T10:25:00.000 | 1 | 0 | 0 | 0 | javascript,python,angularjs,django,cookies | 32,714,567 | 1 | true | 1 | 0 | You won't be able to get cookies on other domain because all cookies are set per domain, this is for security reasons.
If you want to access session and cookies in other domain, you must copy them. You can do it by sending some request with special token (for validation) and create view in django that will fetch data from some storage, based on that token and populate user cookies, so on next request they will be available. | 1 | 1 | 0 | I'm trying to send a cross domain PUT request from AngularJS frontend to Django backend. It's all fine when I'm running on the same domain (frontend at localhost:8000 and backend at localhost:8001), I'm getting my csrftoken from $cookies and can send a successful request. The problem begins when I switch the backend to an external QA server. I get empty $cookies, no sessionid nor csrftoken cookies at all. I ran out of ideas and that's why I'm asking for help here, thanks in advance. | Django not providing CSRF token for AngularJS frontend | 1.2 | 0 | 0 | 49 |
32,714,501 | 2015-09-22T10:27:00.000 | 2 | 0 | 1 | 0 | python,list | 32,714,589 | 4 | false | 0 | 0 | How about just creating a second list holding the positions of each 'tagged' element of the first list thus retaining order in the first one?
tagged=[0,5,8]
And access the tagged elements easily:
tags = [results[i] for i in tagged] | 1 | 0 | 0 | In Python, I have a list of results that occurred in a chronological order:
results = [2,6,4,5,2,3,1,8,4,4]
But the results came from two different machines. I want to treat the data that came from each machine slightly differently.
So perhaps:
results = [2,6,4,5,2,3,1,8,4,4]
where the bold numbers are from machine A and the non-bold numbers are from machine B. I think that I need to some how 'tag' the numbers in some way so that the code can recognize which machine they were from. I realize that I could create two separate lists, but then I would lose the chronology. I need to keep them in the order that they occurred.
So I need to tag the extra information to the results. Is there a standard way to do this? | Tag extra information to differentiate numbers in a list | 0.099668 | 0 | 0 | 51 |
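An alternative to the separate index list in the answer above is to tag each result with its machine in a (value, machine) tuple, which keeps the chronology and makes per-machine handling explicit; the 'A'/'B' labels are illustrative.

results = [(2, 'A'), (6, 'B'), (4, 'A'), (5, 'B'), (2, 'A'),
           (3, 'B'), (1, 'A'), (8, 'B'), (4, 'A'), (4, 'B')]

machine_a = [value for value, machine in results if machine == 'A']
machine_b = [value for value, machine in results if machine == 'B']

# Chronological processing with machine-specific handling:
for value, machine in results:
    if machine == 'A':
        print("machine A reading:", value)
    else:
        print("machine B reading:", value)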