Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
41,524,979 | 2017-01-07T18:29:00.000 | 1 | 0 | 0 | 0 | python,django,python-2.7 | 41,525,185 | 2 | false | 1 | 0 | Django doesn't say any such thing. Put the static folder wherever you want, as long as you include it in STATICFILES_DIRS so that collectstatic will find it. | 1 | 1 | 0 | For example, if I want to use the same style.css for all my apps, where should I put the static folder? I'm pretty sure Django site says to put it in the application folder? But what if I have multiple apps? | Can you put static folder in project root, rather than app root? | 0.099668 | 0 | 0 | 70 |
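The answer above hinges on STATICFILES_DIRS; a minimal settings sketch of that setup follows (the folder names and the BASE_DIR computation here are illustrative assumptions, not taken from the original answer):

```python
# Hypothetical settings.py fragment: one project-root "static" folder shared
# by all apps, plus the deployment target that collectstatic copies into.
import os

# In a real settings.py this would be derived from __file__.
BASE_DIR = os.getcwd()

STATIC_URL = "/static/"
STATICFILES_DIRS = [os.path.join(BASE_DIR, "static")]   # shared style.css lives here
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")     # collectstatic output directory
```

With settings like these, running `./manage.py collectstatic` gathers files from the project-root folder (and every app's `static/` folder) into STATIC_ROOT.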
41,526,677 | 2017-01-07T21:19:00.000 | 3 | 0 | 0 | 0 | python,python-2.7,opencv,image-processing | 41,527,710 | 2 | true | 0 | 0 | To determin if an image is dark, simply calculate the average intensity and judge it.
The problem for recognition, though, is not that the image is dark, but that it has low contrast. A bright image with the same contrast would yield the same bad results.
Histogram equalization is a method used to improve images for human vision. Humans have difficulty distinguishing between very similar intensity values — a problem that a computer does not have, unless your algorithm is somehow made to mimic human vision with all its flaws.
A low contrast image bears little information. There is no image enhancement algorithm in the world that will add any further information.
I won't go into too much detail about image characterization. You'll find plenty of resources online or in textbooks.
A simple measure would be to calculate the standard deviation of image regions you are interested in. | 1 | 1 | 1 | I have some images i'm using for face recognition.
Some of the images are very dark.
I don't want to use Histogram equalisation on all the images only on the dark ones.
How can I determine if an image is dark?
I'm using opencv in python.
I would like to understand the theory and the implementation.
Thanks | How to determine if an image is dark? | 1.2 | 0 | 0 | 2,224 |
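The accepted answer's suggestion — judge "dark" by the average intensity and "low contrast" by the standard deviation — can be sketched without OpenCV. Here a nested list stands in for a grayscale array (in practice it would come from `cv2.imread(path, cv2.IMREAD_GRAYSCALE)`), and the thresholds are assumptions to tune on real data:

```python
# Stdlib-only sketch: mean intensity flags darkness, standard deviation
# flags low contrast. Thresholds (60, 25) are illustrative assumptions.
from statistics import mean, pstdev

def image_stats(gray):
    pixels = [p for row in gray for p in row]
    return mean(pixels), pstdev(pixels)

def is_dark(gray, brightness_threshold=60):
    return image_stats(gray)[0] < brightness_threshold

def is_low_contrast(gray, std_threshold=25):
    return image_stats(gray)[1] < std_threshold

dark_img = [[10, 12, 8], [11, 9, 13]]            # near-black pixels
bright_img = [[200, 210, 190], [205, 198, 202]]  # near-white pixels

print(is_dark(dark_img))           # → True
print(is_dark(bright_img))         # → False
print(is_low_contrast(dark_img))   # → True (dark AND flat)
```

This way histogram equalization can be applied selectively, only to images that fail one of these checks.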
41,528,141 | 2017-01-08T00:26:00.000 | 1 | 1 | 0 | 0 | python,html,python-3.x,web | 44,302,397 | 1 | true | 0 | 0 | So after some good answers and further research, I have found that selenium is the thing that best suits my needs. It works not only with python, but supports other languages as well. If anyone else is looking for something similar to what I had been when I asked my question, a quick Google search for "selenium" should give them all the information they need about the tool that I found best for what I needed. | 1 | 0 | 0 | Ok, so I've looked around on how to do this and haven't really found an answer that showed me examples that I could work from.
What I'm trying to do is have a script that can do things like:
-Log into website
-Fill out forms or boxes etc.
Something simple that might help me and that I thought of would be, for example, if I could write a script that would let me log into one of those text message websites like textnow or something like that, and then fill out a text message and send it to myself.
If anyone knows a good place that explains how to do something like this, or if anyone would be kind enough to give some guidance of their own then that would be greatly appreciated. | How to have python interact automatically with a web site | 1.2 | 0 | 1 | 58 |
41,528,570 | 2017-01-08T01:40:00.000 | 0 | 0 | 0 | 0 | oracle-cloud-infrastructure,oci-python-sdk | 41,528,571 | 1 | true | 0 | 0 | Bare Metal Cloud Services requires TLS 1.2 connections. Your version of OpenSSL is probably too old and does not support TLS 1.2. Please upgrade your version of OpenSSL and try again. | 1 | 0 | 0 | When trying to use the BMCS Python SDK, I get an SSL/TLS exception. Why?
Exception:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)> | Bare Metal Cloud - Python SDK SSL/TLS exception | 1.2 | 0 | 1 | 177 |
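A quick check related to the accepted answer: inspect which OpenSSL your Python links against and whether the build advertises TLS 1.2. This is purely local inspection — it does not contact the BMCS endpoint:

```python
# Local inspection only: the OpenSSL version this Python was built against,
# and whether TLS 1.2 is available (ssl.HAS_TLSv1_2 is a Python 3.7+ flag).
import ssl

print(ssl.OPENSSL_VERSION)   # version string of the linked library
print(ssl.HAS_TLSv1_2)       # True on any reasonably recent build
```

If the version printed is very old (or TLS 1.2 is missing), upgrading OpenSSL and rebuilding/reinstalling Python is the fix the answer describes.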
41,529,910 | 2017-01-08T05:50:00.000 | -1 | 0 | 0 | 0 | python,mysql,sqlalchemy | 41,530,033 | 1 | false | 0 | 0 | Yes, you should always have a separate rowid (either int based one-up or UUID). Especially when you get into other aspects of mysql or database DevOps, having that ID field is a lifesaver (e.g., replication or galera clustering). It also makes working with frameworks like django much easier. | 1 | 1 | 0 | I currently have a database setup where there are 5 columns set as the composite primary key which could uniquely identify a row. Should I still have an ID column to identify each row? It seems redundant, although I am not sure of what is standard.
I am using SQLAlchemy. I noticed that when I had the 5 columns as the composite primary key, the table was significantly slower inserting the data from a CSV file, as compared to if I had an ID column. It was half the speed with the column (not sure if this is relevant).
To be clear: my question is, should I have an ID column alongside the composite primary key, even though the ID column would be redundant? | Can a composite primary key be used in place of an ID primary key? | -0.197375 | 0 | 0 | 199
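The trade-off the answer recommends — a surrogate integer primary key, with the former composite key demoted to a UNIQUE constraint — can be sketched with stdlib sqlite3 rather than the asker's SQLAlchemy/MySQL setup (the table and column names here are invented):

```python
# Surrogate integer PK (a fast rowid alias in SQLite) plus a UNIQUE
# constraint over the five "natural key" columns, so uniqueness from the
# old composite primary key is still enforced.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE measurements (
        id INTEGER PRIMARY KEY,          -- surrogate key
        a TEXT, b TEXT, c TEXT, d TEXT, e TEXT,
        value REAL,
        UNIQUE (a, b, c, d, e)           -- former composite key
    )
""")
con.execute("INSERT INTO measurements (a,b,c,d,e,value) VALUES (?,?,?,?,?,?)",
            ("x", "y", "z", "w", "v", 1.5))
try:
    con.execute("INSERT INTO measurements (a,b,c,d,e,value) VALUES (?,?,?,?,?,?)",
                ("x", "y", "z", "w", "v", 2.0))
except sqlite3.IntegrityError:
    print("duplicate natural key rejected")
```

Inserts only have to maintain one small integer index for the primary key, which is one plausible reason the asker saw faster CSV loads with an ID column.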
41,532,668 | 2017-01-08T12:26:00.000 | 0 | 0 | 1 | 0 | python,django,virtualenv | 41,533,415 | 1 | true | 1 | 0 | Because of the way virtualenvs work, Django was not 'installed' in said env.
The reason ./manage.py startapp firstapp worked is because the ./ indicated that it was an executable file, and hence it was handled by the system as such. That way, modules are accessible, as the program is run outside of the env
Hope this helped! | 1 | 2 | 0 | Before asking, I should mention that this is NOT a problem or a bug, this is just a question to widen my knowledge.
When I was working on a Django project (I'm quite new to all these things, so don't beat me up, please) I activated my virtualenv, installed the django module and started a new Django project by using django-admin.py startproject myproject. That worked. But then I cd'd to myproject and tried to run python manage.py startapp firstapp, and failed.
I caught ImportError: no module named 'django'. The problem was solved by simply typing ./manage.py startapp firstapp . What the heck is that? Why doesn't python command work? Python version is 3.5.2. | Why doesn't "python" command work in virtualenv? | 1.2 | 0 | 0 | 99 |
41,532,714 | 2017-01-08T12:31:00.000 | 0 | 0 | 0 | 0 | python,django,rest | 41,542,658 | 1 | false | 1 | 0 | By creating dedicated API's will help you in future if you decide to build mobile application, that case no need to write new API's as you can consume your existing API's. By doing this your are not repeating your server side codes. Only have to write the client side code. And coming to performance only load time will be more as it has to load HTML, JS and CSS after subsequent requests will be more faster | 1 | 2 | 0 | Let's assume that I want to create a simple todo list web application. I am thinking about creating RESTful web services which will be used my the web application and which will be providing all functionalities concerning managing todo list and tasks. I have two concerns:
Is there any sense in creating a dedicated web service, assuming the web service will only be used by my web application? The web service and web application will be on the same server and there would have to be a lot of 'view' duplication. The web application will only delegate requests to the web service, process the response, and return a web page.
Assuming I decided on this kind of architecture (dedicated web service + web application), what is the best practice for calling the web service from views in the web application? They will be just two different apps within a Django project. | Dedicated web service for web app (Python/Django) | 0 | 0 | 0 | 347
41,533,444 | 2017-01-08T13:55:00.000 | 2 | 0 | 1 | 0 | python,security,python-requests,virus | 41,533,461 | 1 | true | 0 | 0 | No, merely downloading HTTP data won't install a virus.
A virus needs to be activated too, and requests doesn't do anything with the downloaded data for that to happen. Normally, a virus uses bugs in the browser (or more commonly, a plugin in the browser) to trigger code execution, or by tricking the user into executing the downloaded file. For example, bugs in the Flash player executing a Flash file could be used to run arbitrary code, or the user is tricked into believing they downloaded a document but it is really an executable program. | 1 | 4 | 0 | I'm visiting an unknown and possibly malicious website. Lots of them. Python's requests do not run javascript. Can I get infected? Should I consider using a virtual machine? | Can I get a virus by visiting an unknown website using python's requests package? | 1.2 | 0 | 1 | 1,043 |
41,534,489 | 2017-01-08T15:45:00.000 | 1 | 0 | 0 | 0 | python,numpy | 41,543,451 | 4 | false | 0 | 0 | If you really want to catch errors that way, initialize your indices with an out-of-range sentinel (np.full cannot store NaN in an integer array, so use the most negative integer instead):
IXS = np.full((r, c), np.iinfo(np.int64).min, dtype=np.int64)
Using that sentinel as an index will always raise an IndexError. | 1 | 1 | 1 | I want to preallocate an integer matrix to store indices generated in iterations. In MATLAB this can be obtained by IXS = zeros(r,c) before for loops, where r and c are the numbers of rows and columns. Thus all indices in subsequent for loops can be assigned into IXS to avoid dynamic assignment. If I accidentally select a 0 in my code, for example via a wrong way of picking up these indices to select elements from a matrix, an error can arise.
But in numpy, 0 or negative values can also be used as indices. For example, if I preallocate IXS as IXS=np.zeros([r,c],dtype=int) in numpy, then in a for loop the submatrix specified by the indices previously assigned into IXS can be obtained by X(:,IXS(IXS~=0)) in MATLAB, but the first row/column may be lost if I perform the selection in the same way in numpy.
Further, in a large program with operations on large matrices, preallocation is important for speeding up the computation, and in MATLAB it is easy to locate an error raised by wrong indexing because selecting a 0 index fails. In numpy, if I select an array by, for example, X[:,IXS[:n]] with a wrong n, no error occurs. I have to spend lots of time checking where the error is. Worse, if the final results are not obviously strange, I may overlook the bug. This always happens in my programs, so I have to debug my code again and again.
I wonder is there a safe way to preallocate such index matrix in numpy? | How do I safely preallocate an integer matrix as an index matrix in numpy | 0.049958 | 0 | 0 | 620 |
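Since 0 and -1 are both valid Python/numpy indices, one stdlib-only way to get MATLAB-like "fail loudly" behaviour is to preallocate with a hugely out-of-range sentinel, which can never silently select an element. A sketch with a plain list follows (the same idea works for a numpy integer array):

```python
# A "safe sentinel" for preallocated index arrays: any use of an unfilled
# slot as an index raises IndexError instead of silently selecting data.
SENTINEL = -10**9

ixs = [SENTINEL] * 5          # preallocated "index matrix" (1-D here)
ixs[0] = 2                    # only one real index filled in

data = list(range(10))
print(data[ixs[0]])           # → 2

try:
    data[ixs[1]]              # forgot to fill this slot in
except IndexError:
    print("caught unfilled index")
```

The sentinel value itself is an assumption; anything with magnitude larger than any array you index will do.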
41,536,257 | 2017-01-08T18:29:00.000 | 0 | 0 | 0 | 1 | python,mainframe,ftplib | 59,443,599 | 1 | false | 0 | 0 | ftplib ccommand ftp.cwd("'CY01'") works fine. Have been using it for over a year now. | 1 | 0 | 0 | I want to download a dataset from mainframe using Python ftplib.
I am able to login to mainframe and I get default working directory as "CY$$."
I want to change the working directory to "CY01."
I tried using ftp.cwd('CY01.') but it changes the directory to "CY$$.CY01." instead of just "CY01."
While using command prompt I use below command to successfully change working directory:
CD 'CY01.'
(a '.' at the end of the directory name is the IBM command convention to change the default working directory rather than append it to the default directory)
I also tried ftp.sendcmd("CD 'CY01.'") but it gives error "500 unknown command CD"
Can someone please help with changing the default working directory?
Thanks in advance. | Accessing Mainframe datasets using FTP from Python | 0 | 0 | 0 | 1,617
41,538,435 | 2017-01-08T22:08:00.000 | 0 | 0 | 0 | 0 | python,smartsheet-api | 41,707,000 | 2 | false | 0 | 1 | The GUI indicator will eventually alert you to a change regardless of where the update came from. Sometimes, that change can be a little delayed in the UI itself though. | 1 | 0 | 0 | When Smartsheet is updated by one user using the GUI, another user viewing the sheet will get a visual notification that the sheet has been updated - requiring a save and/or refresh.
Is there a way to trigger this functionality from the API.
I'm using the Python SDK 2.0 and Python 3.5
Thanks.
Craig | Is there a way to show the sheet has been updated remotely via the API? | 0 | 0 | 0 | 164 |
41,538,808 | 2017-01-08T22:58:00.000 | 6 | 0 | 0 | 0 | java,python,performance,libgdx,jython-2.7 | 41,544,716 | 1 | true | 0 | 1 | You might have a hard time finding examples of Jython + LibGDX. I would guess it would also be hard to find many people here on SO that have any experience with Jython + LibGDX.
Another issue is cross-platform development. Jython might use the JVM, but Android does not give you the JVM. I don't know how well Jython works with Dalvik.
If I remember correctly LibGDX uses Intel Multi-OS Engine for iOS. I have no idea how that works with Jython. In any case, getting help will be hard.
When it comes to performance of Jython + LibGDX vs. java + LibGDX I don't think there is a big difference. On desktop that is, it might not even work on other platforms.
If you want to develop for desktop only, don't need help and are fine with only seeing java examples and tutorials, then I would say go for it.
In any other case go with java. The time and energy you would need to learn Jython + LibGDX would be much better spent learning java + LibGDX. | 1 | 6 | 0 | I've been developing games in Python & Pygame for a while now, though one thing that's been on my mind is my dislike of pygame's performance and its lack of tools and libraries.
I've always known LibGDX for its popularity and how much I've seen it on this site. Though recently I found out that it supports JVM languages, so I can use it with Python under the Jython interpreter.
Since I have more knowledge of Python, I'm planning on learning LibGDX with it. Though I already know a decent amount of Java, and it wouldn't be an incredible amount of extra work if I were to just finish learning more Java.
Though I do prefer Python for how much I've been working in it.
What I'm asking
I was wondering if there were any downsides to using LibGDX in Python (Jython) instead of its main and popular language, Java. One that comes to mind is performance: would it be slower to develop with LibGDX in Jython than in Java? Another that comes to mind is cross-platform exporting: are you unable to export to Android or iOS using Python (Jython)?
Would anybody really knowledgeable about LibGDX or Jython & LibGDX be able to answer this? | Is LIBGDX Slower in python than Java | 1.2 | 0 | 0 | 1,107
41,543,237 | 2017-01-09T07:49:00.000 | 0 | 0 | 1 | 0 | python,inheritance,global-variables | 41,543,332 | 2 | false | 0 | 0 | "Global" variables are only global to the module (i.e., the file) in which they are defined. If you have a function (including a method) in a given file, and that function refers to a global variable gvar, it will always refer to a global variable gvar within that module, not in any other module.
You can, however, modify or create global variables from outside the module. For instance, if you do import blah and then blah.gvar = 2, you have effectively created a global variable gvar inside blah (or changed its value, if it already existed), and code inside blah that refers to gvar will use the value you have set. | 2 | 0 | 0 | How does Python resolve global variables in calls between packages, especially in cases there are global variables with the same name present?
For example: let there be a package P1 containing BaseClass. Let there be two packages P2 and P3, containing classes Derived1 and Derived2 that inherit from BaseClass correspondingly. Also, let P2 and P3 both contain a variable named gvar (for example, defined in their __init__.py files).
Both derived classes call the BaseClass constructor through super in their constructors.
If in BaseClass constructor there is a reference to gvar, what would happen? Is there a way to ensure that during instantiation of Derived1 gvar from P2 would be used?
Why am I bothering with global variables: in my real-life case there are tens of classes in P1 and P2, and I would like to avoid changing them all (to add a package-specific gvar to their definitions, or adding another common ancestor with it). | How does Python global variables resolution between packages work? | 0 | 0 | 0 | 70
41,543,237 | 2017-01-09T07:49:00.000 | 1 | 0 | 1 | 0 | python,inheritance,global-variables | 41,543,360 | 2 | true | 0 | 0 | Globals are per module. Globals in P1, P2 and P3 are all independent.
Python functions also store a pointer to the globals mapping of their module, so even when imported, globals are still looked up in the module they were defined in.
For your example ,that means that a global referenced from P2.Derived1.__init__ is looked up in P2, and a global referenced from P3.Derived2.__init__ is looked up in P3. Both packages would need to import P1.BaseClass, and any globals P1.BaseClass.__init__ might look up are sourced from P1. | 2 | 0 | 0 | How does Python resolve global variables in calls between packages, especially in cases there are global variables with the same name present?
For example: let there be a package P1 containing BaseClass. Let there be two packages P2 and P3, containing classes Derived1 and Derived2 that inherit from BaseClass correspondingly. Also, let P2 and P3 both contain a variable named gvar (for example, defined in their __init__.py files).
Both derived classes call the BaseClass constructor through super in their constructors.
If in BaseClass constructor there is a reference to gvar, what would happen? Is there a way to ensure that during instantiation of Derived1 gvar from P2 would be used?
Why am I bothering with global variables: in my real-life case there are tens of classes in P1 and P2, and I would like to avoid changing them all (to add a package-specific gvar to their definitions, or adding another common ancestor with it). | How does Python global variables resolution between packages work? | 1.2 | 0 | 0 | 70
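The accepted answer's key point — a function looks up globals in the namespace it was *defined* in, not where it is called from or "imported" to — can be demonstrated with two exec'd namespaces standing in for packages P2 and P3:

```python
# A function's __globals__ is the dict it was defined in; "importing" the
# function elsewhere does not change which gvar it sees.
src = "def get_gvar():\n    return gvar\n"

p2 = {"gvar": "gvar from P2"}
p3 = {"gvar": "gvar from P3"}
exec(src, p2)   # define get_gvar "inside" P2
exec(src, p3)   # and a separate copy "inside" P3

p3["borrowed"] = p2["get_gvar"]   # like `from P2 import get_gvar` in P3
print(p3["borrowed"]())           # → gvar from P2
print(p3["get_gvar"]())           # → gvar from P3
```

So a `gvar` referenced in `P1.BaseClass.__init__` is always resolved in P1, no matter which derived class triggers the call.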
41,544,873 | 2017-01-09T09:35:00.000 | 2 | 0 | 0 | 0 | python,django,git,heroku,git-push | 41,545,127 | 1 | true | 1 | 0 | /media is for user-uploaded files. These must not be stored in the Heroku filesystem.
Heroku creates a new dyno each time you deploy your code, as well as whenever you run a management command and at irregular times otherwise. Each new dyno has its own filesystem which is not shared with other ones and only contains the files from your git repo.
You must configure Django to upload media files somewhere permanent, such as S3. | 1 | 0 | 0 | Help me please.
I have a Django app on Heroku.
I want the /media/ folder on Heroku to stay unchanged after $ git push heroku master.
Thank you. | How to keep '/media/' unchanged on Heroku after "git push ..." | 1.2 | 0 | 0 | 125
41,545,979 | 2017-01-09T10:32:00.000 | 2 | 0 | 0 | 0 | python,web-scraping | 41,546,162 | 1 | false | 0 | 0 | I think what you could do instead is to wrap the requests or urllib modules with a wrapper that logs the URLs your app is connecting to and then just calls the real urllib or requests functions. So whenever you call import requests you actually import your wrapper. | 1 | 0 | 0 | I have an interesting problem where we need to document all the URLs that our massive Python project calls. It's not feasible to manually go through the code because it's too large and changes often.
Ideally, what I'd like is a piece of script that given a Python project folder can go through all the files in it and find where requests or urllib modules make calls and list the accompanying URL. | List all urls called by requests/urllib in python project | 0.379949 | 0 | 1 | 42 |
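The wrapper idea in the answer can be sketched for stdlib urllib by monkey-patching `urlopen` to record each URL before delegating to the real function (the same pattern applies to `requests`; the URL below is a deliberately unreachable placeholder):

```python
# Intercept urllib.request.urlopen, log the URL, then delegate. Every call
# site in the project is logged without editing any of them.
import urllib.request

logged_urls = []
_real_urlopen = urllib.request.urlopen

def logging_urlopen(url, *args, **kwargs):
    logged_urls.append(url if isinstance(url, str) else url.full_url)
    return _real_urlopen(url, *args, **kwargs)

urllib.request.urlopen = logging_urlopen

# Any code that now calls urllib.request.urlopen(...) gets logged first.
try:
    urllib.request.urlopen("http://localhost:9/unreachable", timeout=0.1)
except OSError:
    pass
print(logged_urls)   # → ['http://localhost:9/unreachable']
```

Running the project's test suite with such a patch in place yields a log of every URL it tried to reach, even when the calls themselves fail.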
41,550,060 | 2017-01-09T14:21:00.000 | 0 | 0 | 0 | 0 | python,excel | 60,421,964 | 5 | false | 0 | 0 | I am not sure if this is what the OP was looking for,but if you have to manipulate data in python without installing any modules (just standard library), you can try the sqlite3 module, which of course allows you to interact with sqlite files (a Relational Database Management System).
These databases are conceptually similar to an Excel file. If an excel file is basically a collection of sheets, with each sheet being a matrix where you can put data, sqlite databases are the same (but each "matrix" is called a table instead).
This format is scripting friendly, as you can read and write data using SQL, but it does not follow the client-server model other DBMSs are based on. The whole database is contained in a single file that you can email to a colleague, and you can also install a GUI that gives you a spreadsheet-like interface to make it more user-friendly (DB Browser for SQLite is available for Windows, Linux and Mac).
This allows you to include SQL code in your python scripts, which adds a lot of data processing capabilities, and it is an excellent way to achieve data persistence for simple programs. | 1 | 4 | 0 | I am new to Python.
I use PuTTY to manage some servers. I want to use Python to create an Excel file on each server; for that, I think I can use commands like ssh ip "python abc.py" to create the file. It is possible to write a bash script to manage all the servers. This is the trouble I meet:
The servers can't use the internet.
And it is not allowed to use any third-party libraries. When Linux (Red Hat 6.5) is installed, is there any library in Python that can be used to create an Excel file immediately?
Please help me, thanks. | how to create a excel file only with python standard library? | 0 | 1 | 0 | 15,636 |
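A minimal stdlib-only sketch of the sqlite3 suggestion follows (if a file that Excel itself can open is required, the stdlib `csv` module is the usual fallback; the table and column names here are invented):

```python
# sqlite3 as a "spreadsheet" substitute: one table per sheet, SQL instead
# of cell formulas. Use a filename instead of ":memory:" to persist a file
# you can copy off the server.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sheet1 (host TEXT, cpu REAL, mem REAL)")
con.executemany("INSERT INTO sheet1 VALUES (?, ?, ?)",
                [("web01", 12.5, 40.0), ("db01", 73.2, 88.1)])
con.commit()

for row in con.execute("SELECT host, cpu FROM sheet1 WHERE cpu > 50"):
    print(row)   # → ('db01', 73.2)
```

Everything used here ships with the Python that comes preinstalled on the servers, so no internet access or third-party install is needed.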
41,555,224 | 2017-01-09T19:12:00.000 | 2 | 0 | 0 | 1 | python,macos,shell,automator | 41,555,296 | 4 | false | 0 | 0 | Since you have the shebang line, you can do ./my_script.py and it should run with Python 3. | 3 | 5 | 0 | I am trying to use Automator on macOS 10.12 to launch a Python 3 script. The script works just fine when I run it from the terminal with the command: python3 my_script.py.
Automator has a "Run Shell Script" function that uses the /bin/bash shell. The shell will run scripts with the command: python my_script.py, but this only seems to work for scripts written in Python 2.7.
My script starts with #!/usr/bin/env python3, which I thought would direct the shell to the correct python interpreter, but that doesn't seem to be the case.
As a workaround, I can get the script to run if I insert the full path to the python interpreter: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3, but I see this as suboptimal because the commands might not work if/when I update to Python 3.6.
Is there a better way to direct the /bin/bash shell to run Python3 scripts? | python3 won't run from mac shell script | 0.099668 | 0 | 0 | 7,111 |
41,555,224 | 2017-01-09T19:12:00.000 | 1 | 0 | 0 | 1 | python,macos,shell,automator | 54,734,617 | 4 | false | 0 | 0 | You can install Python 3 via Homebrew with brew install python3 and use #!/usr/local/bin/python3 as your shebang.
Not a perfect solution but still better than using the full path of the interpreter. | 3 | 5 | 0 | I am trying to use Automator on macOS 10.12 to launch a Python 3 script. The script works just fine when I run it from the terminal with the command: python3 my_script.py.
Automator has a "Run Shell Script" function that uses the /bin/bash shell. The shell will run scripts with the command: python my_script.py, but this only seems to work for scripts written in Python 2.7.
My script starts with #!/usr/bin/env python3, which I thought would direct the shell to the correct python interpreter, but that doesn't seem to be the case.
As a workaround, I can get the script to run if I insert the full path to the python interpreter: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3, but I see this as suboptimal because the commands might not work if/when I update to Python 3.6.
Is there a better way to direct the /bin/bash shell to run Python3 scripts? | python3 won't run from mac shell script | 0.049958 | 0 | 0 | 7,111 |
41,555,224 | 2017-01-09T19:12:00.000 | -1 | 0 | 0 | 1 | python,macos,shell,automator | 54,741,833 | 4 | false | 0 | 0 | If python refers to Python 2 then that*s what you should expect. Use python3 in the command line, or defer to the script itself to define its interpreter.
In some more detail, make sure the file's first line contains a valid shebang (you seem to have this sorted); but the shebang doesn't affect what interpreter will be used if you explicitly say python script.py. Instead, make the file executable, and run it with ./script.py.
Actually, you can use env on the command line, too. env python3 script.py should work at the prompt, too. | 3 | 5 | 0 | I am trying to use Automator on macOS 10.12 to launch a Python 3 script. The script works just fine when I run it from the terminal with the command: python3 my_script.py.
Automator has a "Run Shell Script" function that uses the /bin/bash shell. The shell will run scripts with the command: python my_script.py, but this only seems to work for scripts written in Python 2.7.
My script starts with #!/usr/bin/env python3, which I thought would direct the shell to the correct python interpreter, but that doesn't seem to be the case.
As a workaround, I can get the script to run if I insert the full path to the python interpreter: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3, but I see this as suboptimal because the commands might not work if/when I update to Python 3.6.
Is there a better way to direct the /bin/bash shell to run Python3 scripts? | python3 won't run from mac shell script | -0.049958 | 0 | 0 | 7,111 |
41,556,981 | 2017-01-09T21:09:00.000 | -1 | 1 | 1 | 0 | python,python-3.x,pip,ubuntu-14.04,setup.py | 41,557,048 | 1 | false | 0 | 0 | Any time I have problems like this, I find that writing the explicit path to the executable helps a great deal, no matter the level I place the executable. So instead of running pip ~do something~ try /etc/pip ~do something~. | 1 | 1 | 0 | can't find anyone posting a similar situation so thought I'd ask and see. Currently trying to automate unit tests within our continuous deployment environment.
We do the typical python setup.py test command from our virtualenv. However, we have our own internal pypi server for some of our internal libraries. pip.conf is configured so when explicitly running pip install it will check the internal pypi server. But when running setup.py test, it tries to use pip to install requirements and appears to not be aware of the pip.conf file. I've placed the pip.conf at the global level (/etc/pip.config), virtualenv level, and the user level but to no avail. It's almost like it's calling a different pip, which I would assume would be the base install (not virtualenv), but it ignores the global pip.conf also. Any ideas? Thanks in advance! | running python setup.py test pip not finding pip.conf to install internal requirements | -0.197375 | 0 | 0 | 766
41,557,022 | 2017-01-09T21:12:00.000 | 0 | 0 | 1 | 0 | python-3.x,math,matrix | 41,564,872 | 1 | true | 0 | 0 | I'd use Sage if this were a quick hack, and maybe consider using something optimized for GF(2) if the matrices are really big, to ensure that only one bit is used for each entry and that addition of several elements can be accomplished using a single XOR operation. One benefit of working over a finite field is that you don't have to worry about numeric stability, so naive Gauss–Jordan would sound like a good approach. | 1 | 2 | 1 | I'm practicing programming and I would like to know what is the easiest way to solve linear system of equations over the field Z/2Z? I found a problem where I managed to reduce the problem to solve a system of about 2200 linear equations over Z/2Z but I'm not sure what is the easiest way to write a solver for the equations. Is there simpler solution that use nested lists to represent a matrix and then manually write the Gauss–Jordan algorithm? | Is there an easy way to solve a system of linear equations over Z/2Z in Python? | 1.2 | 0 | 0 | 612 |
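The naive Gauss–Jordan approach the answer endorses can be sketched over Z/2Z with nested lists: addition is XOR, and as the answer notes there are no numeric-stability concerns:

```python
# Gauss–Jordan elimination over GF(2). Solves A·x = b for 0/1 matrices;
# returns one solution (free variables set to 0) or None if inconsistent.
def solve_gf2(A, b):
    n, m = len(A), len(A[0])
    rows = [A[i][:] + [b[i]] for i in range(n)]   # augmented matrix
    pivot_cols, r = [], 0
    for col in range(m):
        pivot = next((i for i in range(r, n) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(n):                        # eliminate above and below
            if i != r and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_cols.append(col)
        r += 1
    if any(row[-1] and not any(row[:-1]) for row in rows):
        return None                               # 0 = 1 → inconsistent
    x = [0] * m
    for i, col in enumerate(pivot_cols):
        x[col] = rows[i][-1]
    return x

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
b = [1, 0, 1]
print(solve_gf2(A, b))   # → [1, 0, 0]
```

For ~2200 equations this runs cubically in pure Python, which is usually still fast enough; packing rows into integers and XOR-ing whole rows at once is the standard speed-up if it isn't.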
41,558,204 | 2017-01-09T22:40:00.000 | 0 | 0 | 1 | 0 | python,import,path,anaconda,environment | 47,405,214 | 2 | false | 0 | 0 | Dunno if you still need it, but for future users: once a new environment is created, one must install the specific libraries and modules one wants accessible in that environment.
In this case, once the gl-env is activated, do a conda install wget again. | 1 | 0 | 0 | I have Anaconda 2 with Python 2.7 running on Windows 8. I have installed GraphLab and GraphLab has created a separate environment in Anaconda -- the gl-env environment.
I am facing problems in importing libraries that I have installed successfully via conda through the Windows Commander from the Scripts subdirectory.
For example, I installed wget successfully but when I try to import it from my Jupyter Notebook from the gl-env environment I get an error message that the module does not exist.
I suspect that this error has something to do with the two environments and the PATH content, but I do not know enough to figure that out. I did some research on Stack Overflow and it seems that other people have had importing problems relating to different environments, but as far as I understand there is no specific advice I can implement.
Your advice will be appreciated. | Can not import library installed in Anaconda 2 while operating with different environments | 0 | 0 | 0 | 480 |
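A quick diagnostic for this kind of environment mix-up: print which interpreter and environment root the notebook or script is actually using, and compare it with the environment (e.g. gl-env) where the package was conda-installed:

```python
# If sys.prefix points at the base Anaconda install rather than the env
# where the package was installed, the ImportError is explained.
import sys

print(sys.executable)   # path of the running interpreter
print(sys.prefix)       # root of its (virtual) environment
```

In a Jupyter notebook this tells you which kernel you are really on, independent of which environment was active in the shell that launched it.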
41,558,258 | 2017-01-09T22:45:00.000 | 31 | 0 | 0 | 0 | python,django,git,django-admin | 41,558,803 | 1 | true | 1 | 0 | While you can absolutely check these files in, I typically recommend not checking in the collected static files into git (we use .gitignore to ignore them). Instead, we call collectstatic during our build/deploy steps so that if anyone ever adds new static files, they are collected and copied to the proper output directory for serving by nginx or sent up to s3. If you want to check them into git, I would recommend having collectstatic as a precommit hook so that no one accidentally forgets to run it when adding a new static file. | 1 | 24 | 0 | I ran manage.py collectstatic on my webapp and now I have a new static folder with only Django admin files (so far I've been using just CDNs for my project CSS and JS files). Should I track these admin static files on my git repo? What's the best practise? | Should I add Django admin static files to my git repo? | 1.2 | 0 | 0 | 7,555 |
41,558,760 | 2017-01-09T23:32:00.000 | 3 | 0 | 1 | 0 | python,anaconda,python-idle | 41,565,041 | 2 | false | 0 | 0 | You cannot switch versions of Python from within Python. IDLE runs on top of whatever version of Python, and cannot switch the version running it. You can simultaneously run IDLE 2.7 on Python 2.7 and IDLE 3.5 on Python 3.5.
When you run code from any IDLE editor, it is added to your File => Recent files list, which is used by any version of IDLE you run. I frequently pull a file into another running version to see if it runs the same, perhaps after revision for known differences between 2.7 and 3.x.
At least 95% of code that people write runs the same directly in Python (with the -i flag) and IDLE. The IDLE doc, accessible under Help => IDLE Help, notes these differences.
3.2. IDLE-console differences
As much as possible, the result of executing Python code with IDLE is
the same as executing the same code in a console window. However, the
different interface and operation occasionally affects visible
results. For instance, sys.modules starts with more entries.
IDLE also replaces sys.stdin, sys.stdout, and sys.stderr with objects
that get input from and send output to the Shell window. When this
window has the focus, it controls the keyboard and screen. This is
normally transparent, but functions that directly access the keyboard
and screen will not work. If sys is reset with importlib.reload(sys),
IDLE’s changes are lost and things like input, raw_input, and print
will not work correctly.
With IDLE’s Shell, one enters, edits, and recalls complete statements.
Some consoles only work with a single physical line at a time. IDLE
uses exec to run each statement. As a result, '__builtins__' is
always defined for each statement.
There are probably a few more equally esoteric things I should add. | 2 | 0 | 0 | I installed python 2.7 and I have the IDLE version of it. I also created two environments using the terminal of Python 3 and Python 2 with conda.
When I type python it shows me that I'm using Python 3.5.2. Now:
How can I switch between two versions in the IDLE or the terminal?
What's the difference between coding in the IDLE or the terminal? | What's the difference between coding in the IDLE and the terminal? | 0.291313 | 0 | 0 | 4,327 |
41,558,760 | 2017-01-09T23:32:00.000 | 0 | 0 | 1 | 0 | python,anaconda,python-idle | 62,862,693 | 2 | false | 0 | 0 | IDLE has a feature where it suggests operations on a variable automatically or via ctrl+space. But in the terminal, no such suggestion prompts appear.
Not sure how you can switch versions in the terminal.
When I type python it shows me that I'm using Python 3.5.2. Now:
How can I switch between two versions in the IDLE or the terminal?
What's the difference between coding in the IDLE or the terminal? | What's the difference between coding in the IDLE and the terminal? | 0 | 0 | 0 | 4,327 |
41,559,294 | 2017-01-10T00:25:00.000 | 1 | 0 | 1 | 0 | python,selenium,pycharm,selenium-chromedriver | 56,002,173 | 4 | false | 0 | 0 | You can specific custom PATH variable for chromedriver to PyCharm debug configuration environment variables. | 1 | 1 | 0 | I install chromedriver through my package.json file and it gets installed in my npm_modules folder. Then I add it to the PATH of executables, when running through terminal tests are passing.
When running the same command in pycharm, says that it cannot find the executable:
WebDriverException: Message: 'chromedriver' executable needs to be in PATH.
I'm guessing that I have to set it up in a specific way in PyCharm.
Thanks | Pycharm not finding executable for chromedriver for selenium | 0.049958 | 0 | 1 | 8,419 |
41,559,470 | 2017-01-10T00:47:00.000 | 2 | 1 | 0 | 0 | python,django,nginx,wsgi,gunicorn | 41,559,760 | 1 | true | 1 | 0 | Gunicorn has a preforking worker model -- meaning that it launches several independent subprocesses, each of which is responsible for handling a subset of the load.
If you're relying on internal application state being consistent across all threads involved in offering your service, you'll want to turn the number of workers down to 1, to ensure that all those threads are within the same process.
Of course, this is a stopgap -- if you want to be able to scale your solution to run on production loads, or have multiple servers backing your application, then you'll want to be modify your system to persist the relevant state to a shared store, rather than relying on content being available in-process. | 1 | 1 | 0 | This is really troublesome for me. I have a telegram bot that runs in django and python 2.7. During development I used django sslserver and everything worked fine. Today I deployed it using gunicorn in nginx and the code works very different than it did on my localhost. I tried everything I could since I already started getting users, but all to no avail. It seems to me that most python objects lose their state after each request and this is what might be causing the problems. The library I use has a class that handles conversation with a telegram user and the state of the conversation is stored in a class instance. Sometimes when new requests come, those values would already be lost. Please has anyone faced this? and is there a way to solve the problem quick? I am in a critical situation and need a quick solution | Python objects lose state after every request in nginx | 1.2 | 0 | 0 | 210 |
41,560,614 | 2017-01-10T03:21:00.000 | 3 | 0 | 0 | 1 | python,airflow,workflow | 64,042,837 | 5 | false | 0 | 0 | This error can be misleading. If hitting refresh button or restarting airflow webserver doesn't fix this issue, check the DAG (python script) for errors.
Running airflow list_dags can display the DAG errors (in addition to listing out the dags) or even try running/testing your dag as a normal python script.
After fixing the error, this indicator should go away. | 2 | 51 | 0 | when I put a new DAG python script in the dags folder, I can view a new entry of DAG in the DAG UI but it was not enabled automatically. On top of that, it seems does not loaded properly as well. I can only click on the Refresh button few times on the right side of the list and toggle the on/off button on the left side of the list to be able to schedule the DAG. These are manual process as I need to trigger something even though the DAG Script was put inside the dag folder.
Anyone can help me on this ? Did I missed something ? Or this is a correct behavior in airflow ?
By the way, as mentioned in the post title, there is an indicator with this message "This DAG isn't available in the webserver DagBag object. It shows up in this list because the scheduler marked it as active in the metadata database" tagged with the DAG title before I trigger all this manual process. | Airflow "This DAG isn't available in the webserver DagBag object" | 0.119427 | 0 | 0 | 25,022
41,560,614 | 2017-01-10T03:21:00.000 | 16 | 0 | 0 | 1 | python,airflow,workflow | 51,391,238 | 5 | false | 0 | 0 | Restart the airflow webserver solves my issue. | 2 | 51 | 0 | when I put a new DAG python script in the dags folder, I can view a new entry of DAG in the DAG UI but it was not enabled automatically. On top of that, it seems does not loaded properly as well. I can only click on the Refresh button few times on the right side of the list and toggle the on/off button on the left side of the list to be able to schedule the DAG. These are manual process as I need to trigger something even though the DAG Script was put inside the dag folder.
Anyone can help me on this ? Did I missed something ? Or this is a correct behavior in airflow ?
By the way, as mentioned in the post title, there is an indicator with this message "This DAG isn't available in the webserver DagBag object. It shows up in this list because the scheduler marked it as active in the metadata database" tagged with the DAG title before I trigger all this manual process. | Airflow "This DAG isn't available in the webserver DagBag object" | 1 | 0 | 0 | 25,022
41,560,796 | 2017-01-10T03:46:00.000 | 0 | 0 | 1 | 0 | python,numpy,python-3.5 | 41,577,386 | 3 | false | 0 | 0 | Winpython has two size, and the smallest "Zero" size doesn't include numpy | 1 | 1 | 1 | I just installed numpy on my PC (running windows 10, running python 3.5.2) using WinPython, but when i try to import it in IDLE with: import numpy I get the ImportError: Traceback (most recent call last):
File "C:\Users\MY_USERNAME\Desktop\DATA\dataScience1.py", line 1, in <module>
import numpy
ImportError: No module named 'numpy'.
Did I possibly install it incorrectly, or do I need to do something else before it can be used? | Numpy not found after installation | 0 | 0 | 0 | 3,858 |
41,563,696 | 2017-01-10T07:49:00.000 | 0 | 0 | 0 | 0 | python,django,django-class-based-views | 41,565,380 | 1 | false | 1 | 0 | As I said, get is an instance method.
But you are confusing responsibilities here. A class should have one responsibility only. The view class has the responsibility of responding to the request and returning a response; this is quite separate from the connection to the instrument. That should be a separate class, which is instantiated at module level and used within the view class. | 1 | 0 | 0 | I'm currently building a Django app which uses a singleton object.
I want to save this object as a CBV variable because I don't want to initialize it for every 'get' call.
My question in short - can you make a CBV's get function an instance method instead of a classmethod?
And if so, can I save a variable as an instance variable?
EDIT - A better explanation to my question:
I created a class that handles a serial connection with an electronic measurement instrument.
This class must have only one instance (singleton); if another instance is created, a memory leak will crash Python.
I want to use it with django in the following way:
Get request to a certain url address ->
The view will ask the instrument class instance for data->
Instance responds with data ->
View returns a JsonResponse with the data.
I think the best way to do it is making the CBV's get method (whose related to the url im getting from) an instance method, but its not such a good practice..
How should I do it? | Making Django's CBV's get an instance method | 0 | 0 | 0 | 65 |
41,566,222 | 2017-01-10T10:08:00.000 | 1 | 0 | 0 | 0 | python,web.py | 41,726,066 | 1 | false | 1 | 0 | Headers aren't part of web.input(), they're part of the "environment".
You can add headers, to be sent to your client using web.header('My-Header', 'header-value').
You can read headers sent by your client using: web.ctx.env.get('MY_HEADER') (Note all-caps, and use of underline rather than dash). | 1 | 1 | 0 | I look after many links to find how can I define my proprietary header in webpy. Can you help me, please. I need to define my own http header like ("X-UploadedFile") and then use it with web.input() | Define own header in webpy | 0.197375 | 0 | 1 | 253 |
41,568,395 | 2017-01-10T12:01:00.000 | 1 | 0 | 0 | 1 | macos,python-3.x,terminal,subprocess | 41,615,003 | 1 | false | 0 | 0 | Thanks for the comments guys but I managed to figure it out.
In the end I used a combination of subprocess.Popen() and os.chdir() and it seems to work using Jupyter Notebook. | 1 | 0 | 0 | I have recently started using a program which has command line interfaces accessed through the Mac Terminal.
I am trying to automate the process whereby a series of commands are passed through the terminal using Python.
So far I have found a way to open the Terminal using the subprocess.Popen command but how do I then "write" in the terminal once it's open ?
For example what I am looking to do is;
1. Open the Terminal App.
2. Select a directory in the App.
3. Run a command. In this instance the file I wish to run is called "RunUX" and what I want to type is "./RunUX ..." followed by command line arguments.
I'm fairly new to Python and programming and appreciate all help !!
Thanks | Manipulating the Terminal Using a Python Script | 0.197375 | 0 | 0 | 379 |
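A hedged sketch of the subprocess route the accepted answer settled on: rather than typing into an open Terminal window, run the tool directly and pick its working directory with `cwd` instead of `os.chdir`. Here `python -c` stands in for the asker's `./RunUX` executable, which is an assumption of this sketch:

```python
import subprocess
import sys

# Run a command in a chosen working directory and capture its output,
# instead of driving an open Terminal window. The question's
# "./RunUX <args>" call would go where the python -c demo command is.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the tool')"],
    cwd="/tmp",           # step 2: "select a directory" for the tool
    capture_output=True,  # collect stdout/stderr for later inspection
    text=True,
)
print(result.stdout.strip())
```

The same pattern works for any command-line program: replace the list with the program name followed by its arguments.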
41,569,406 | 2017-01-10T12:58:00.000 | 5 | 1 | 0 | 0 | python-2.7,lasagne | 41,593,958 | 1 | false | 0 | 0 | this can be solved by first import lasagne then import theano..
if exchange the import order,then the error arise...
this is very strange.i am not sure what happens,but it does work | 1 | 5 | 0 | there is some really strange problem.
Traceback (most recent call last):
File "mnist.py", line 17, in
import lasagne.nonlinearities as nonlinearities
File "/usr/local/lib/python2.7/dist-packages/lasagne/init.py", line 17, in
from . import nonlinearities
ImportError: cannot import name nonlinearities
As I go to this folder, I find there is it(the name).but for some unkown reason,(I guess path problem). It does not work.
this may have been caused by my mistaken operation, but my mistaken command was not executed.
In detail: originally my lasagne==0.1 had a module that could not be imported, so I solved it by installing the latest version, lasagne==0.2.dev1, and then it worked. For some reason I interrupted my program; before running it again I had entered some unexecuted mistaken commands, and now the error is there as you see. I guess it is because of two versions of lasagne under the path /usr/local/lib/python2.7/dist-packages/, so I uninstalled all of them and then reinstalled one version, but the error is still there.
Additionally, the following commands are OK:
python
import lasagne
import lasagne.nonlinearites as nonl | from . import nonlinearities cannot import name nonlinearities | 0.761594 | 0 | 0 | 2,149 |
41,570,318 | 2017-01-10T13:43:00.000 | 5 | 0 | 0 | 0 | python,bokeh | 44,185,703 | 6 | false | 1 | 0 | bokeh can also be run via python -m bokeh
given that, you could open up the Run/Debug Configuration dialog and set your interpreter options to -m bokeh serve --show and your script will run as-is | 1 | 11 | 0 | Bokeh serve allows to write fast web apps with plots and widgets.
How can I debug the python code when I use bokeh serve --show code.py? | Debugging Bokeh serve application using PyCharm | 0.16514 | 0 | 0 | 4,506 |
41,570,359 | 2017-01-10T13:44:00.000 | 10 | 0 | 1 | 0 | python,python-3.x,exe,py2exe,python-3.6 | 52,402,056 | 8 | false | 0 | 0 | Now you can convert it by using PyInstaller. It works with even Python 3.
Steps:
Fire up your PC
Open command prompt
Enter command pip install pyinstaller
When it is installed, use the command 'cd' to go to the working directory.
Run command pyinstaller <filename> | 1 | 176 | 0 | I'm trying to convert a fairly simple Python program to an executable and couldn't find what I was looking for, so I have a few questions (I'm running Python 3.6):
The methods of doing this that I have found so far are as follows
downloading an old version of Python and using pyinstaller/py2exe
setting up a virtual environment in Python 3.6 that will allow me to do 1.
downloading a Python to C++ converter and using that.
Here is what I've tried/what problems I've run into.
I installed pyinstaller before its required prerequisite download (pypi-something), so it did not work. After downloading the prerequisite file, pyinstaller still does not recognize it.
If I'm setting up a virtualenv in Python 2.7, do I actually need to have Python 2.7 installed?
similarly, the only python to C++ converters I see work only up until Python 3.5 - do I need to download and use this version if attempting this? | How can I convert a .py to .exe for Python? | 1 | 0 | 0 | 507,870 |
41,570,818 | 2017-01-10T14:09:00.000 | 4 | 0 | 1 | 0 | python,python-2.7 | 41,570,861 | 4 | false | 0 | 0 | The keys in a dictionary are, because of the dictionary's underlying structure (hash map), unordered. You have to order the keys yourself when you iterate the keys (i.e. by doing sorted(dict.keys()) or some other sorting method - if you want to sort by value, you still have to do that manually). | 1 | 0 | 0 | I have a dictionary, say
{'name4': 380, 'name2': 349, 'name3': 290, 'name1': 294}
I have sorted the dictionary based on the values using the sorted method and the result is a list of tuples
[('name3', 290), ('name1', 294), ('name2', 349), ('name4', 380)]
But, when I try to convert this list of tuples back to dictionary, it's back again to the old structure:
{'name4': 380, 'name2': 349, 'name3': 290, 'name1': 294}
I want the sorted list to be made into dictionary as it is. I have used dict() and manually used for loop to assign the values. But it's again leading to the same result.
Could anyone help me on this? | Converting a sorted list into dictionary in Python | 0.197375 | 0 | 0 | 4,483 |
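A runnable sketch of the approach the answer points at: since a plain dict in Python 2.7 does not keep order, feed the sorted list of tuples into collections.OrderedDict, which preserves insertion order:

```python
from collections import OrderedDict

d = {'name4': 380, 'name2': 349, 'name3': 290, 'name1': 294}
# Sort by value, then keep that order with an order-preserving mapping.
ordered = OrderedDict(sorted(d.items(), key=lambda kv: kv[1]))
print(list(ordered.items()))
# → [('name3', 290), ('name1', 294), ('name2', 349), ('name4', 380)]
```

Converting back with plain dict() would discard the order again, which is exactly the behavior the asker observed.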
41,571,973 | 2017-01-10T15:07:00.000 | 2 | 0 | 1 | 0 | bash,server,ipython-notebook | 41,572,158 | 1 | true | 0 | 0 | Preface your ipython command with nohup. Then when the login session terminates that child process will not see the logout or "hang-up" signal that gets sent to all child processes on logout.
See man nohup for details. | 1 | 1 | 0 | Is it possible to run ipython notebook server and close terminal?
If I run ipython notebook server like ipython notebook --no-browser --port=8888 --ip 0.0.0.0 or ipython notebook --no-browser --port=8888 --ip 0.0.0.0 & and then close terminal, server also stopped.
Is there any way to avoid it (to run ipython notebook server for "forever") ? | Run ipython notebook server and close terminal | 1.2 | 0 | 0 | 461 |
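A minimal sketch of the nohup pattern from the answer; here `sleep` stands in for the actual `ipython notebook --no-browser --port=8888 --ip 0.0.0.0` command, which is an assumption of this sketch:

```shell
# Start the server detached from the login session; output goes to a
# log file and the process survives the terminal being closed.
nohup sleep 0.2 > /tmp/nohup_demo.log 2>&1 &
SERVER_PID=$!
wait "$SERVER_PID"
test -f /tmp/nohup_demo.log && echo started
```

The `&` backgrounds the process and `nohup` makes it ignore the hang-up signal sent on logout.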
41,573,587 | 2017-01-10T16:27:00.000 | 56 | 0 | 1 | 0 | python,virtualenv,virtualenvwrapper,pyenv,python-venv | 65,854,168 | 8 | false | 0 | 0 | Let's start with the problems these tools want to solve:
My system package manager don't have the Python versions I wanted or I want to install multiple Python versions side by side, Python 3.9.0 and Python 3.9.1, Python 3.5.3, etc
Then use pyenv.
I want to install and run multiple applications with different, conflicting dependencies.
Then use virtualenv or venv. These are almost completely interchangeable, the difference being that virtualenv supports older python versions and has a few more minor unique features, while venv is in the standard library.
I'm developing an /application/ and need to manage my dependencies, and manage the dependency resolution of the dependencies of my project.
Then use pipenv or poetry.
I'm developing a /library/ or a /package/ and want to specify the dependencies that my library users need to install
Then use setuptools.
I used virtualenv, but I don't like virtualenv folders being scattered around various project folders. I want a centralised management of the environments and some simple project management
Then use virtualenvwrapper. Variant: pyenv-virtualenvwrapper if you also use pyenv.
Not recommended
pyvenv. This is deprecated, use venv or virtualenv instead. Not to be confused with pipenv or pyenv. | 1 | 1,722 | 0 | Python 3.3 includes in its standard library the new package venv. What does it do, and how does it differ from all the other packages that seem to match the regex (py)?(v|virtual|pip)?env? | What is the difference between venv, pyvenv, pyenv, virtualenv, virtualenvwrapper, pipenv, etc? | 1 | 0 | 0 | 438,843 |
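For the "conflicting dependencies" case above, a minimal stdlib `venv` sketch; the `/tmp/demo_env` path is just an example:

```shell
# Create an isolated environment with the stdlib venv module; the
# environment's interpreter and pip are independent of the system ones.
python3 -m venv /tmp/demo_env
/tmp/demo_env/bin/python --version
```

Each application gets its own environment, so their dependency sets never collide.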
41,573,669 | 2017-01-10T16:31:00.000 | 1 | 0 | 0 | 0 | python-2.7,rally,code-rally,pyral | 41,579,286 | 1 | false | 0 | 0 | Most of the reports on the Reports tab are rendered by a separate Analytics 1 service outside of the standard WSAPI you've been communicating with. Some of that data is available in WSAPI -IterationCumulativeFlowData, ReleaseCumulativeFlowData. What data specifically are you looking for? | 1 | 2 | 0 | I have been working on developing a Rally API using python with the help of links pointed by Rally help (Pyral). My code connects well with the Rally server and pulls specific user story I want, but not the columns I am interested in. I am aiming to pull [full] specific reports with fields such as project, tags, etc. under the 'Reports' tab. I tried to find out how can I do it but didn't get direction. Also, the specific user stories I am able to pull include some weird fields like c_name, c_offer and the like. I would really appreciate if someone could help me through this. Like to connect to a specific project/workspace in Rally we have the following code where it asks the details in the manner below:
rally = Rally(server='', apikey='',workspace='',project='')
Is there any way to specify what report/columns I want?
Thanks in advance | How to connect Rally API (python) to the specific reports under 'Report' section | 0.197375 | 0 | 1 | 346 |
41,575,620 | 2017-01-10T18:15:00.000 | 0 | 0 | 0 | 0 | python,hadoop,graph,random-walk,bigdata | 44,357,542 | 1 | false | 0 | 0 | My understanding is, you need to process large graphs which are stored on file systems. There are various distributed graph processing frameworks like Pregel, Pregel+, GraphX, GPS(Stanford), Mizan, PowerGraph etc.
It is worth taking a look at these frameworks. I would suggest coding in C or C++ using OpenMPI, which can help achieve better efficiency.
Frameworks in Java are not very memory efficient. I am not sure about the Python APIs of these frameworks.
It is worth taking a look at blogs and papers which give a comparative analysis of these frameworks before deciding on implementing them. | 1 | 0 | 1 | I am working on a project that involves a RandomWalk on a large graph(too big to fit in memory). I coded it in Python using networkx but soon, the graph became too big to fit in memory, and so I realised that I needed to switch to a distributed system. So, I understand the following:
I will need to use a graph database as such(Titan, neo4j, etc)
A graph processing framework such as Apache Giraph on hadoop/ graphx on spark.
Firstly, are there enough APIs to allow me to continue to code in Python, or should I switch to Java?
Secondly, I couldn't find exact documentation on how I can write my custom function of traversal(in either Giraph or graphx) in order to implement the Random Walk algorithm. | Large graph processing on Hadoop | 0 | 0 | 0 | 480 |
41,577,756 | 2017-01-10T20:25:00.000 | 3 | 0 | 0 | 0 | python,django | 41,578,308 | 1 | true | 1 | 0 | Just don't give them superuser rights. Superuser means they have all rights automatically, which isn't what you want.
Then add edit, add, delete rights for the models they are allowed to edit, add and delete. You can create a group that you give these rights to, then add the users to that group.
If a user doesn't have add, edit or delete rights to a model, the model isn't shown in the admin. | 1 | 0 | 0 | TL;DR I'd like to be able to disable certain models per-user in /admin view.
Specifically: I'm looking to make admin models invisible to some staff users, so that they can have a sort of customized dashboard. There's all sorts of fields that change how to present, search, query, etc. models based on whatever you want, but I can't find anything to allow me to determine whether or not to even show models on the /admin page without resorting to blacklisting individual permissions (of which there are hundreds), and I'd like to be able to make some models only available to superusers and not staff.
Any thoughts?
Thanks! | Django - make admin fields invisible to some users | 1.2 | 0 | 0 | 248 |
41,578,978 | 2017-01-10T21:44:00.000 | -1 | 0 | 1 | 0 | python-3.x,module | 41,579,312 | 1 | true | 1 | 0 | You are getting the error because you used pip. pip basically is for version 2 of python. for using it for pip3, you should first install pip3 package. Then copy all your dependencies to python3 libraries.
Hope that answers the question. | 1 | 0 | 0 | Using the Mac Os Terminal, I downloaded the Jupyter lightning module using "pip install lightning-python" in the /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages directory.
However, when I try to import it, I get the error, No module named 'lightning'
and when I list out the files in the site-packages directory, the module doesn't appear. | Python module installed in site-packages folder, but getting error "No module ..." | 1.2 | 0 | 0 | 443 |
41,586,429 | 2017-01-11T08:55:00.000 | 6 | 0 | 1 | 0 | python,opencv,image-processing | 41,626,482 | 7 | false | 0 | 0 | Thank you everyone. Your ways are perfect. I would like to share another way I used to fix the problem. I used the function os.chdir(path) to change local directory to path. After which I saved image normally. | 1 | 29 | 1 | I'm learning OpenCV and Python. I captured some images from my webcam and saved them. But they are being saved by default into the local folder. I want to save them to another folder from direct path. How do I fix it? | OpenCV - Saving images to a particular folder of choice | 1 | 0 | 0 | 137,529 |
41,588,925 | 2017-01-11T10:53:00.000 | 1 | 0 | 1 | 0 | python,django,python-3.x,pip,python-3.6 | 41,588,975 | 5 | false | 1 | 0 | As is common with these sort of pip issues, before you install, check where pip is pointing to with pip -V.
If that points to Python 2, you can then try pip3 -V; if that points to an older version of Python 3, go for pip3.6.
As a final approach, you can always go through python itself with python3.6 -m pip install ... | 2 | 12 | 0 | If I run pip install Django I get
Requirement already satisfied: Django in
/usr/local/lib/python2.7/dist-packages
I'd like to use python3.6 instead (which is already installed in /usr/bin/python3.6). What's the correct pip syntax to install the last version of Django on python 3.6? | pip install Django on python3.6 | 0.039979 | 0 | 0 | 22,964 |
41,588,925 | 2017-01-11T10:53:00.000 | 4 | 0 | 1 | 0 | python,django,python-3.x,pip,python-3.6 | 41,588,953 | 5 | false | 1 | 0 | If you have pip3 then directly use
pip3 install Django
Else try to use virtualenv for your python version as :
virtualenv -p python3.6 name
then you can install any version of Django on it. | 2 | 12 | 0 | If I run pip install Django I get
Requirement already satisfied: Django in
/usr/local/lib/python2.7/dist-packages
I'd like to use python3.6 instead (which is already installed in /usr/bin/python3.6). What's the correct pip syntax to install the last version of Django on python 3.6? | pip install Django on python3.6 | 0.158649 | 0 | 0 | 22,964 |
41,589,611 | 2017-01-11T11:23:00.000 | 2 | 0 | 1 | 0 | python-2.7,image-processing,raspberry-pi,phantom-types | 41,591,155 | 1 | false | 0 | 0 | I would be surprised if a off-the-shelf multicopter would comprise enough processing power to do any reasonable image processing on-board. It wouldn't make sense for the manufacturer.
But I guess it has some video or streaming capabilties or can be equipped with such. Then you can process the data on a remote computer, given that you are in transmission range.
If you have to process on a remote device it doesn't make any sense to request real-time processing. What for? I mean the multicopter can't do anything useful with real-time results and just for mapping or inspection purposes delay doesn't matter.
In general your question cannot be answered as no one can tell you if any hardware is capable of real-time processing without knowing how much there is to process.
To answer the rest of your questions:
You can connect a raspberry pi to the Phantom.
You can use Python 2.7 and OpenCV to write image processing code.
That you ask things like that makes me think that you might not be up to the job. So unless you have a team of talented people I guess it will take you years to come out with a usable and robust solution. | 1 | 0 | 1 | I have a project to detect ripeness of specific fruit, I will use phantom 2 with autopilot feature to fly through fruit trees and capture images then I want to make real time image processing.
I was searching a lot but didn't find the answers for the following questions.
can I use phantom 2 for real time image processing? can I connect
raspberry pi to the phantom? and what I need? can I use python 2.7 +
opencv lib to write image processing codes? | use Phantom 2 for real time image processing | 0.379949 | 0 | 0 | 91 |
41,589,655 | 2017-01-11T11:25:00.000 | 66 | 0 | 1 | 0 | python,python-3.x,python-3.6 | 41,589,841 | 1 | true | 0 | 0 | What does the m stand for in python3.6m?
It signifies that Python was configured --with-pymalloc which enables a specialized implementation for allocating memory that's faster than the system malloc.
How does it differ to non m version?
The non m version is, obviously, not configured with it.
In which case would I prefer to use python3.6m rather than python3.6?
Probably most useful when writing C extensions; in general it shouldn't be something you need to worry about.
How does it differ to non m version?
In which case would I prefer to use python3.6m rather than python3.6? | What's the difference between python3. and python3.m | 1.2 | 0 | 0 | 24,349 |
41,590,884 | 2017-01-11T12:23:00.000 | 2 | 0 | 0 | 0 | python,pandas | 56,167,288 | 4 | false | 0 | 0 | To simply change one column, here is what you can do:
df.column_name.apply(int)
you can replace int with the desired datatype, e.g. np.int64, str, or 'category'.
For multiple datatype changes, I would recommend the following:
df = pd.read_csv(data, dtype={'Col_A': str, 'Col_B': np.int64}) | 2 | 15 | 1 | I want to sort a dataframe with many columns by a specific column, but first I need to change its type from object to int. How do I change the data type of this specific column while keeping the original column positions? | Change data type of a specific column of a pandas dataframe | 0.099668 | 0 | 0 | 77,099
41,590,884 | 2017-01-11T12:23:00.000 | 26 | 0 | 0 | 0 | python,pandas | 41,591,077 | 4 | false | 0 | 0 | df['colname'] = df['colname'].astype(int) works when changing from float values to int atleast. | 2 | 15 | 1 | I want to sort a dataframe with many columns by a specific column, but first I need to change type from object to int. How to change the data type of this specific column while keeping the original column positions? | Change data type of a specific column of a pandas dataframe | 1 | 0 | 0 | 77,099 |
41,591,079 | 2017-01-11T12:32:00.000 | 0 | 0 | 0 | 0 | python,django,django-1.10 | 41,592,978 | 1 | false | 1 | 0 | Django's ORM might not be the right tool for you if you need to change your schema (or db) online - the schema is defined in python modules and loaded once when Django's web server starts.
You can still use Django's templates, forms and other libraries and write your own custom DB access layer that manipulates a DB dynamically using python. | 1 | 0 | 0 | I am developing a Cloud based data analysis tool, and I am using Django(1.10) for that.
I have to add columns to the existing tables, create new tables, change data-type of columns(part of data-cleaning activity) at the run time and can't figure out a way to update/reflect those changes, in run time, in the Django model, because those changes will be required in further analysis process.
I have looked into 'inspectdb' and 'syncdb', but all of these options would require taking the portal offline and then making those changes, which I don't want.
Please can you suggest a solution or a work-around of how to achieve this.
Also, is there a way in which I can select what database I want to work from the list of databases on my MySQL server, after running Django. | Changing Database in run time and making the changes reflect in Django in run time | 0 | 1 | 0 | 43 |
41,592,506 | 2017-01-11T13:40:00.000 | 1 | 0 | 0 | 0 | python,django,date,datetime | 41,604,291 | 3 | true | 1 | 0 | Django seems to be putting its timezone in the TZ environment variable. Try del os.environ['TZ'] then using tzlocal. | 1 | 3 | 0 | As long as I'm using plain ol' Python shell, the datetime.datetime.now() command works fine to get system's local (non-UTC) time.
But I'm working on a Django project where the time zone is changed in settings.py with TIME_ZONE = 'UTC'.
I've tried many solutions from django.utils timezone to tzlocal module, but none of them works. All of them return either incorrect or UTC time.
All of the solutions work if I change the timezone in settings.py to my local timezone. But I can't do that, so is there any way to bypass the default timezone option in settings.py? Or any way the settings.py's timezone can be automatically updated? If I remove the TIME_ZONE line, I don't know why, but it seems to get a random timezone.
EDIT -
I know that the timezone can be entered manually with pytz, but I don't want to do that. I want to get the local system timezone, but WITHOUT Django's moderation of the timezone.
Thanks. | Get system time w/timezone in Django bypassing default timezone | 1.2 | 0 | 0 | 772 |
41,595,193 | 2017-01-11T15:51:00.000 | 2 | 0 | 1 | 0 | python,python-imaging-library,fractals | 41,595,337 | 1 | true | 0 | 1 | Use matplotlib, wxPython, PyQt, PyGame, Tk/TCL or some other lib to display the image.
Draw as many images as you need, whenever you need, using any lib you need, and then display it on a screen using one of above mentioned or some other GUI libs.
If you are working with plots and math functions, matplotlib will help you most. You might even totally use it, forgoing PIL completely.
If you want to stick to PIL only, you will have to write your own show() function that will use some external imaging software which will seamlessly change to show another image when you send it. Perhaps IrfanView would do. | 1 | 0 | 0 | I have a Python function that draws a fractal to a PIL.Image, but I want to vary the parameters of the function in real time and plot it to the screen. How can I plot the image and keep updating the plotted image each time the parameters of the function vary? | Draw multiple PIL.Image in python | 1.2 | 0 | 0 | 382
41,596,658 | 2017-01-11T17:05:00.000 | 1 | 0 | 0 | 0 | python,pyqt,qaction | 41,606,857 | 2 | true | 0 | 1 | What you are looking for is a toggle button. This is implemented in Qt via the checkable property: if an action is checkable, then when the action is in a button the button is a toggle button; when the action is in a menu item you see a checkmark; etc. | 1 | 0 | 0 | Is there a way to make QAction stay down after it is clicked. Ideally it could toggle between two states: On (down) and Off (up)? | How to make toggle-able QAction | 1.2 | 0 | 0 | 1,747 |
41,599,283 | 2017-01-11T19:34:00.000 | 0 | 1 | 0 | 0 | python,opencv,coordinates,robotics,coordinate-transformation | 41,604,196 | 1 | false | 0 | 0 | Define your 2D coordinate on the board, create a mapping from the image coordinate (2D) to the 2D board, and also create a mapping from the board to robot coordinate (3D). Usually, robot controller has a function to define your own coordinate (the board). | 1 | 0 | 1 | I am new in robotics and I am working on a project where I need to pass the coordinates from the camera to the robot.
The robot is just an arm; it sits stable in a fixed position. I do not even need the 'z' axis, because the board or table where everything happens always has the same 'z' coordinate.
The webcam is also in a fixed position; it is not part of the robot and it does not move.
The problem I am having is the conversion from 2D camera coordinates to 3D robotic-arm coordinates (2D is enough because, as stated before, the 'z' axis is always fixed and therefore not needed).
I'd like to know which is the best approach to this kind of problem, so I can start researching.
I've found a lot of information on the web, but it left me confused; I would really appreciate it if someone could point me in the right direction.
I don't know if this information is useful, but I am using OpenCV 3.2 with Python.
Thank you in advance | Translation from Camera Coordinates System to Robotic-Arm Coordinates System | 0 | 0 | 0 | 1,200 |
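The two-stage mapping the answer describes (image to board, board to robot) can be sketched with a per-axis linear fit, assuming the camera and board axes are aligned. All numbers below are hypothetical calibration values; a full affine fit (e.g. OpenCV's cv2.getAffineTransform) would additionally handle rotation:

```python
def linear_map(p0, p1, q0, q1):
    """Return f(p) for one axis: reference pixels p0, p1 -> board coords q0, q1."""
    scale = (q1 - q0) / (p1 - p0)
    return lambda p: q0 + (p - p0) * scale

# Hypothetical calibration: two markers on the board, located in the image.
px_to_mm_x = linear_map(100, 500, 0.0, 200.0)   # image x (px) -> board x (mm)
px_to_mm_y = linear_map(80, 480, 0.0, 200.0)    # image y (px) -> board y (mm)

def camera_to_robot(px, py, z_fixed=0.0):
    # The second stage (board -> robot) is the identity here; add another
    # linear_map per axis if the robot frame differs from the board frame.
    return (px_to_mm_x(px), px_to_mm_y(py), z_fixed)

print(camera_to_robot(300, 280))  # (100.0, 100.0, 0.0)
```

In practice you would read the marker pixel positions from the OpenCV image instead of hard-coding them.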
41,599,285 | 2017-01-11T19:35:00.000 | -1 | 0 | 0 | 0 | python,sql,security,hash | 41,599,436 | 3 | false | 0 | 0 | Your question doesn't make it clear if you need to display those SSNs. I'm going to assume you do not. Store the SSN in a SHA2 hash. You can then do a SQL query to search against those hashed values. Store only the last 4 digits encrypted for display. | 2 | 2 | 0 | My main problem is that I would like to check if someone with the same SSN has multiple accounts with us. Currently all personally identifiable info is encrypted and decryption takes a non-trivial amount of time.
My initial idea was to add a ssn column to the user column in the database. Then I could simply do a query where I get all users with the ssn or user A.
I don't want to store the ssn in plaintext in the database. I was thinking of just salting and hashing it somehow.
My main question is: is this secure (or how secure is it)? Is there a simple way to salt and hash, or encrypt, an SSN using Python?
Edit: The SSN's do not need to be displayed.
This is using a MySQL database. | How to securely and efficiently store SSN in a database? | -0.066568 | 1 | 0 | 2,477 |
41,599,285 | 2017-01-11T19:35:00.000 | 4 | 0 | 0 | 0 | python,sql,security,hash | 41,600,634 | 3 | true | 0 | 0 | Do not encrypt SSNs, when the attacker gets the DB he will also get the encryption key.
Just using a hash function is not sufficient and just adding a salt does little to improve the security.
Basically, handle the SSNs in the same manner as passwords.
Instead, iterate over an HMAC with a random salt for about a 100 ms duration and save the salt with the hash. Use functions such as PBKDF2 (aka Rfc2898DeriveBytes), password_hash/password_verify, Bcrypt and similar functions. The point is to make the attacker spend a lot of time finding passwords by brute force. Protecting your users is important, please use secure password methods. | 2 | 2 | 0 | My main problem is that I would like to check if someone with the same SSN has multiple accounts with us. Currently all personally identifiable info is encrypted and decryption takes a non-trivial amount of time.
My initial idea was to add a ssn column to the user column in the database. Then I could simply do a query where I get all users with the ssn or user A.
I don't want to store the ssn in plaintext in the database. I was thinking of just salting and hashing it somehow.
My main question is: is this secure (or how secure is it)? Is there a simple way to salt and hash, or encrypt, an SSN using Python?
Edit: The SSN's do not need to be displayed.
This is using a MySQL database. | How to securely and efficiently store SSN in a database? | 1.2 | 1 | 0 | 2,477 |
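The iterated-HMAC scheme from the accepted answer can be sketched with Python's standard library (the iteration count is a placeholder; tune it so one hash takes roughly 100 ms on your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # placeholder; calibrate to ~100 ms per hash

def hash_ssn(ssn, salt=None):
    """Salted, iterated hash suitable for storing alongside the user row."""
    if salt is None:
        salt = os.urandom(16)  # random per-record salt
    digest = hashlib.pbkdf2_hmac("sha256", ssn.encode(), salt, ITERATIONS)
    return salt, digest

def verify_ssn(candidate, salt, stored_digest):
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare

salt, digest = hash_ssn("123-45-6789")
print(verify_ssn("123-45-6789", salt, digest))  # True
print(verify_ssn("000-00-0000", salt, digest))  # False
```

Note one trade-off for the duplicate-account check: a random per-record salt makes a direct SQL equality lookup impossible, so you either hash the candidate SSN against each stored salt, or accept a weaker single site-wide HMAC key to keep lookups cheap.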
41,601,083 | 2017-01-11T21:34:00.000 | 0 | 0 | 1 | 0 | python | 41,601,325 | 2 | false | 0 | 0 | You would be able to use a dynamic programming solution to solve this problem, but if you are new to programming in general this would be difficult. You can assume that you would start by placing a tile aligned with a corner of the floor (with two possible orientations), and once this tile is placed it creates two sub-problems of smaller rectangles.
You also know that you can produce n(l) x m(w) and n(w) x m(l) rectangles using your one tile with l x w. I think the largest (by area) such rectangle that completely fits within the boundaries will always be part of the solution. Maybe try to see if you can prove that is always the case? | 1 | 0 | 0 | I'm starting to learn how to program in python and I came across this problem that I can only use these functions:
Basic mathematical and logical operators (+, -, *, /, //, %, **, <, <=, >, >=, ==, !=, and, or, not)
Any functions and constants available in the Math module
min, max, abs,
type
len (for length of strings)
int, str, float conversion functions
round
I'm not looking for an answer so much as how to approach the problem.
The question asks to determine the minimum number of identical tiles all with the same orientation that are required to cover the floor of a rectangular room. Any excess from a tile exceeding the floor area is discarded and cannot be
reused. Write a Python function called min_tiles which consumes 4 positive integers, room_width, room_length, tile_width, tile_length, and produces the
minimum number of tiles required to completely cover the floor of the room.
Here are some examples:
min_tiles(4,4,2,2) => 4
If the tiles are rectangular, they can only be oriented in one direction, not
both. So, if the floor is 3 x 4, and the tiles are 1 x 3, then in one direction it would take 6 tiles to cover the floor (discarding excess pieces), but in the other direction it would only take 4 tiles to cover the floor. You should produce the minimum in this case, which would be 4. Hence min_tiles(3,4,1,3) => 4
Thanks in advance! | Min number of tiles required for a floor | 0 | 0 | 0 | 815 |
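A closed-form sketch that reproduces the question's examples (one possible approach, not necessarily the intended one; it relies on the problem's rule that overhanging excess is simply discarded, and uses only the permitted functions math.ceil and min):

```python
import math

def min_tiles(room_width, room_length, tile_width, tile_length):
    # Orientation 1: tile_width runs along room_width
    a = math.ceil(room_width / tile_width) * math.ceil(room_length / tile_length)
    # Orientation 2: every tile rotated 90 degrees (all tiles share orientation)
    b = math.ceil(room_width / tile_length) * math.ceil(room_length / tile_width)
    return min(a, b)
    # (integer-exact alternative to math.ceil: -(-room_width // tile_width))

print(min_tiles(4, 4, 2, 2))  # 4
print(min_tiles(3, 4, 1, 3))  # 4
```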
41,603,416 | 2017-01-12T00:54:00.000 | 0 | 0 | 1 | 1 | python,dependencies,virtualenv | 41,748,115 | 2 | false | 0 | 0 | Surely the easiest way is simply to modify your Python environment to search another directory where it will find your modified distlib before it picks it up from the stdlib? The classic way to do this is by setting your PYTHONPATH environment variable. No changes required to your Python installation! | 1 | 0 | 0 | I want to do some development on Python's distlib, and in the process run the code via virtualenv which has distlib as a dependency.
That is, not run the process inside a virtualenv, but run virtualenv's code using a custom dependency. What are the steps I need to go through to achieve this?
It seems to me that normal package management (pip) is not possible here. | Running Virtualenv with a custom distlib? | 0 | 0 | 0 | 42 |
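The PYTHONPATH approach from the answer can be sanity-checked from Python itself; the directory below is hypothetical and should point at your modified distlib checkout:

```python
import os
import subprocess
import sys

custom = "/tmp/my-distlib"  # hypothetical path to the patched distlib's parent dir
env = dict(os.environ, PYTHONPATH=custom)
out = subprocess.run(
    [sys.executable, "-c", "import sys; print('/tmp/my-distlib' in sys.path)"],
    env=env, capture_output=True, text=True,
)
# PYTHONPATH entries are searched before the installation's library paths,
# so a distlib package placed there shadows the bundled one.
print(out.stdout.strip())  # True
```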
41,603,576 | 2017-01-12T01:15:00.000 | 1 | 1 | 0 | 0 | python,unit-testing,selenium,selenium-webdriver,selenium-ide | 41,607,809 | 1 | true | 1 | 0 | If you are recording scripts in Python formatting, those are already converted to unit test cases. Save each scripts and run them in batch mode. | 1 | 0 | 0 | I have recorded some Selenium Scripts using the Selenium IDE Firefox add-on.
I'd like to add these to the unit test cases for my Django project. Is it possible to somehow turn these into a Python unit test case? | Using Selenium Scripts in a Python unit test | 1.2 | 0 | 1 | 81 |
41,605,355 | 2017-01-12T04:53:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tensorflow,pip | 42,087,865 | 2 | false | 0 | 0 | I have the same error when I run this command. I found error that the installed version of python was x86 and TensorFlow is for x64 versions. I reinstalled the python with x64 version and it works now! I hope this works for you too! | 1 | 4 | 1 | I'm trying to install Tensorflow, and received the following error.
tensorflow-0.12.1-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
By reading through other questions, I think I've traced the issue to the cp35 tag not being supported by the version of pip I have installed. What's odd is that I believe I installed python 3.5 and the latest version of pip (9.0.1), but have the following supported tags:
[('cp27', 'cp27m', 'win_amd64'), ('cp27', 'none', 'win_amd64'), ('py2', 'none', 'win_amd64'), ('cp27', 'none', 'any'), ('cp2', 'none', 'any'), ('py27', 'none', 'any'), ('py2', 'none', 'any'), ('py26', 'none', 'any'), ('py25', 'none', 'any'), ('py24', 'none', 'any'), ('py23', 'none', 'any'), ('py22', 'none', 'any'), ('py21', 'none', 'any'), ('py20', 'none', 'any')]
How can I go about modifying the supported tags, or is that even the right approach? | Updating the supported tags for pip | 0 | 0 | 0 | 2,301 |
41,616,099 | 2017-01-12T14:50:00.000 | 2 | 0 | 1 | 0 | python | 41,616,154 | 3 | false | 0 | 0 | Each python installation has a separate set of libraries. Your python 3 does not know about Python 2 and its libraries. It seems the default pip command calls the python2 pip script. Run again the pip install, but with the python3 pip (look for it in your python3 folder, it is probably named pip3) | 1 | 0 | 0 | I have been using PyCharm on Ubuntu to run some Python code, where in Edit Configurations I specified the interpreter path as /usr/bin/python2.7. The code uses the pygame module, and so to install this, I also ran sudo pip install pygame. Then I used import pygame in my Python script, and the file ran ok.
However, I now want to use Python 3.4. So, in PyCharm, I specified the interpreter path to be /usr/bin/python3.4. However, when I run the same file, I now get the error: ImportError: No module named 'pygame'.
Can somebody explain why this is happening? How can I get my Python 3.4 interpreter to find Pygame?
Thanks! | ImportError when changing Python interpreter | 0.132549 | 0 | 0 | 52 |
41,616,426 | 2017-01-12T15:03:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,lmdb | 42,486,747 | 2 | false | 0 | 0 | You can open as many named databases within the same write transaction as you like.
So:
Open write transaction
Open named databases as required and write to them
Commit your transaction
As long as you take into account that you can only ever have one write-transaction at a time (read-only transactions are no problem), and that your other transactions will only see the result of your write-transaction once you commit, you can of course have one long-running write transaction. | 1 | 0 | 0 | I want to write a lot of data to a lmdb data base with several named (sub) data bases. I run into the following problem:
To write to one named data base, I need to open a transaction for this named data base.
This implies: To write to another named data base, I need to open a different transaction.
Two write transactions inside the same main database cannot exist at the same time.
This implies: I need to commit and close a transaction each time I want to switch from writing to one named data base to writing to another named data base.
Creating and committing write transactions is a really slow operation.
I rather would like to keep one long-running write transaction for all write operations and commit it once --- when all the work is done.
Is this possible with lmdb (if yes, at which point did I err in my analysis)? | lmdb: Can I access different named databases in the same transaction? | 0.099668 | 1 | 0 | 1,421 |
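The answer's recipe (one write transaction covering several named databases) can be sketched with the py-lmdb binding; the import is guarded here in case the package is not installed, and note that max_dbs must be sized for the named databases you plan to open:

```python
import tempfile

try:
    import lmdb  # pip install lmdb (py-lmdb)
except ImportError:
    lmdb = None  # sketch degrades gracefully without the package

def write_two_dbs(path):
    env = lmdb.open(path, max_dbs=2)
    users = env.open_db(b"users")
    events = env.open_db(b"events")
    with env.begin(write=True) as txn:       # ONE write transaction...
        txn.put(b"u1", b"alice", db=users)
        txn.put(b"e1", b"login", db=events)  # ...covers both named DBs
    with env.begin() as txn:                 # read-only txn to verify
        return txn.get(b"u1", db=users), txn.get(b"e1", db=events)

if lmdb is not None:
    with tempfile.TemporaryDirectory() as d:
        print(write_two_dbs(d))  # (b'alice', b'login')
```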
41,616,827 | 2017-01-12T15:22:00.000 | 3 | 1 | 0 | 0 | python,linux,class,protocol-buffers,meta | 41,629,972 | 1 | true | 0 | 0 | Mostly, because it's easier to read.
The code generators for C++ and Java are really hard to understand and edit, because you have to follow both the generator code and the code being generated at the same time.
The Python code generator could have been done the same way. However, because Python is a dynamic language, it's possible to use metaclasses instead. Essentially, this allows most of the code to be constructed at runtime. The metaclass is much easier to read and edit than a code generator because it is all straight Python, with no ugly print statements.
Now, you might argue that Java could have done something similar: Generate very simple classes, and then use reflection to read and write the fields. The problem with that is that Java is a compiled language. Compiled code will perform much better than reflection-based code. Python, however, is not compiled, so there's not much penalty for using a reflection approach (it's slow either way). In fact, because Python is designed to be dynamic, you can do a lot of neat tricks that wouldn't be possible in other languages (but, again, it's slow either way). | 1 | 2 | 0 | protobuf generates C++/Java classes, and these are static typed class, enough for encoding/decoding. Why it generates python classes with metaclass attribute: I would suppose ordinary class will be enough to do rpc, like C++/Java generated classes.
Why should Python use a dynamic class?
Thanks. | Why protobuf generates python class with __metaclass__ attribute? | 1.2 | 0 | 0 | 357 |
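A toy illustration (not protobuf's actual generator output) of how a metaclass can build class members at class-creation time instead of spelling them out in generated source:

```python
class MessageMeta(type):
    """Builds one property per declared field when the class is created."""
    def __new__(mcls, name, bases, ns):
        for field in ns.pop("FIELDS", ()):
            ns[field] = property(
                lambda self, f=field: self._values.get(f),
                lambda self, value, f=field: self._values.__setitem__(f, value),
            )
        return super().__new__(mcls, name, bases, ns)

class Message(metaclass=MessageMeta):
    def __init__(self):
        self._values = {}

class Person(Message):           # mimics a generated message class
    FIELDS = ("name", "id")

p = Person()
p.name = "Ada"
print(p.name)  # Ada
```

(Protobuf's Python 2-era generated code uses the `__metaclass__` attribute; the `metaclass=` keyword above is the Python 3 spelling of the same mechanism.)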
41,618,369 | 2017-01-12T16:33:00.000 | 0 | 0 | 1 | 0 | python,c++,data-structures | 41,618,419 | 1 | true | 0 | 0 | List, set and map (called dictionary), are built-in in python. For others see the collections module (especially the collections.deque is suitable for queues) | 1 | 0 | 0 | What are the equivalent data structures in Python of those in C++? As a contestant I have always used C++ STL in programming contests. So, vector,set,map,queue,priority queue,pair all of these came in handy. Recently, I tried to solve a few problems using python. But could not find same structures. So, what are the similar data structures in Python? If not available, what are the tricks to implement those? | Python Data Structures for programming contest | 1.2 | 0 | 0 | 150 |
41,619,431 | 2017-01-12T17:28:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,linear-regression | 41,620,561 | 1 | true | 0 | 0 | Yes, using cross validation will give you a better estimate of your model performance.
Splitting randomly (cross-validation) will, however, not work for time series and/or all distributions of data.
The "final model" will not be better only your estimate on model performance. | 1 | 1 | 1 | I have a dataset with a total of 58 samples. The dataset has two columns "measured signals" and "people_in_area". Due to it, I am trying to train a Linear Regression model using Scikit-learn. For the moment, I splited 75% of my dataset for training and 25% for testing. However, depending on the order in which the data was before the split, I obtain different R-squared values.
I think that, as the dataset is small, depending on the order the data was in before being split, different values would be kept as x_test and y_test. Because of this, I am thinking of using "Cross-Validation" on my Linear Regression model to divide the test and train data randomly several times, training it more and also being able to test more, obtaining more reliable results this way. Is this a correct approach? | Can I apply Cross Validation in a Linear Regression model? | 1.2 | 0 | 0 | 1,054
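A minimal stdlib sketch of the k-fold idea behind the answer: each sample serves in a test fold exactly once, and the per-fold scores are averaged. In practice sklearn.model_selection.KFold / cross_val_score does this for you:

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs; every sample is tested exactly once."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

# 58 samples (as in the question), 5 folds:
folds = list(kfold_indices(58, 5))
print(len(folds))                              # 5
print(sorted(i for _, t in folds for i in t) == list(range(58)))  # True
```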
41,620,292 | 2017-01-12T18:21:00.000 | 1 | 1 | 0 | 0 | python,amazon-web-services,amazon-ec2 | 41,620,629 | 2 | false | 0 | 0 | You can check it from that instance and execute below command
curl http://169.254.169.254/latest/meta-data/security-groups
or from aws-cli also
aws ec2 describe-security-groups | 1 | 1 | 0 | EDIT Removed BOTO from question title as it's not needed.
Is there a way to find the security groups of an EC2 instance using Python and possibly Boto?
I can only find docs about creating or removing security groups, but I want to trace which security groups have been added to my current EC2 instance. | How do I list Security Groups of current Instance in AWS EC2? | 0.099668 | 0 | 1 | 842 |
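The same metadata query in Python, as a sketch: the 169.254.169.254 endpoint only resolves from inside an EC2 instance, so the fetch function is defined here but not called; the parsing helper is what you would feed its response to.

```python
from urllib.request import urlopen

METADATA_URL = "http://169.254.169.254/latest/meta-data/security-groups"

def fetch_security_groups(timeout=2):
    # Works only on the instance itself, like the curl command above.
    with urlopen(METADATA_URL, timeout=timeout) as resp:
        return parse_security_groups(resp.read().decode())

def parse_security_groups(body):
    # The endpoint returns one security-group name per line.
    return [line.strip() for line in body.splitlines() if line.strip()]

print(parse_security_groups("default\nweb-sg\n"))  # ['default', 'web-sg']
```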
41,620,621 | 2017-01-12T18:41:00.000 | 4 | 0 | 0 | 0 | python,anaconda | 41,620,776 | 2 | true | 0 | 1 | Try libraries traitsUI and Enaml. These are both supported in anaconda and both open-source projects from the company Enthought (many anaconda employees/founders are closely tied to Enthought). These libraries make use of underlying backends (wx, qt, tk) and facilitate much faster GUI dev than does working with those core frameworks directly. | 1 | 1 | 0 | I have been studying python and looking for ways to install and use some GUI framework I can use. I have read of native tkinter, and QtPy, Kivy, wxPython etc, but getting problems installing them.
Recently I read about Anaconda and want to give it a try. But is it going to solve my GUI framework issue? I see some frameworks in the list, but I am not sure which of them is a GUI framework. Or is there no GUI framework included (other than tkinter, of course)? | Is there a GUI framework available in anaconda (python )? | 1.2 | 0 | 0 | 25,141
41,621,209 | 2017-01-12T19:18:00.000 | 1 | 1 | 0 | 0 | python,apache,cgi | 41,621,284 | 1 | false | 0 | 0 | One thing you could try:
Return a proper HTML header
Print a dot . every few seconds, just to keep the connection alive
disable mod_deflate in your Apache server to prevent HTTP compression
add SetEnv no-gzip to your .htaccess file | 1 | 1 | 0 | I run a CGI script on an Apache server (XAMPP). The script basically runs in an infinite loop. After about 3 minutes I get the error message "script timed out before returning headers". I've searched through the internet and found suggestions like:
change MAX_EXECUTION_TIME to 0 -> didn't work
set_time_limit(0) -> didn't work
socket.setdefaulttimeout(0) -> didn't work
I think the error is caused by my script never returning anything to the website, but that's exactly as intended. Basically, the script should be started through a website and run until I tell it to stop (which it does by constantly checking for a file).
One solution I thought about was a script that restarts my script if it's terminated. But a far more elegant solution would be for the script to run without being terminated by the server.
I hope everything is explained well and somebody can help me, because I've been stuck on this problem for far too long and it's starting to annoy me. | script timed out before returning headers cgi | 0.197375 | 0 | 0 | 2,514
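A bounded sketch of the answer's keep-alive trick for a Python CGI script: send the header first, then periodically emit a flushed dot so Apache sees output before its timeout fires. The real script would loop until its file-based stop condition; n is capped here only so the example terminates.

```python
import sys
import time

def keep_alive(n, interval=0.0):
    # Header first, then a dot per iteration, flushed immediately.
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    for _ in range(n):
        sys.stdout.write(".")
        sys.stdout.flush()
        time.sleep(interval)   # a few seconds between dots in the real script

keep_alive(3)
print()
```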
41,621,666 | 2017-01-12T19:45:00.000 | 2 | 0 | 0 | 0 | python,django,apache | 41,621,883 | 2 | true | 1 | 0 | Neither of these things.
Depending on how your server is configured, it will start up multiple processes and/or threads to handle multiple requests. Each of those will handle a single request at a time; however each process stays alive at the end of a request and continues to run in order to handle subsequent requests. | 2 | 1 | 0 | I'm a little confused as to how Apache manages separate "instances" of a Django application.
Let's say I do the following:
Go to the URL of my Django application
Open up a new browser tab
In the new tab, also go to the URL of my Django application
Are two Python instances started, one for each browser tab?
The application contains a form that a user fills out. After they submit the form, a POST request is sent back to the view. The view then calls another function to do something with the POST data. Let's say I do that in the first browser tab.
While that function is running, if I submit the form now in the second browser tab, will running that function be blocked until the function is done running in the first tab? Or are separate Python instances started?
I'm just trying to figure out if I need to start a separate process each time the function is called from the view, in order to support multiple "instances" of the application (e.g., either in separate browser tabs or multiple users accessing the application simultaneously).
Thanks for any clarification! | Is a separate Python instance started for each instance of a Django application via Apache? | 1.2 | 0 | 0 | 57 |
41,621,666 | 2017-01-12T19:45:00.000 | 0 | 0 | 0 | 0 | python,django,apache | 41,621,708 | 2 | false | 1 | 0 | In the scenario you outlined there should be one instance of Django running on the server. It just happens to be managing two sessions. | 2 | 1 | 0 | I'm a little confused as to how Apache manages separate "instances" of a Django application.
Let's say I do the following:
Go to the URL of my Django application
Open up a new browser tab
In the new tab, also go to the URL of my Django application
Are two Python instances started, one for each browser tab?
The application contains a form that a user fills out. After they submit the form, a POST request is sent back to the view. The view then calls another function to do something with the POST data. Let's say I do that in the first browser tab.
While that function is running, if I submit the form now in the second browser tab, will running that function be blocked until the function is done running in the first tab? Or are separate Python instances started?
I'm just trying to figure out if I need to start a separate process each time the function is called from the view, in order to support multiple "instances" of the application (e.g., either in separate browser tabs or multiple users accessing the application simultaneously).
Thanks for any clarification! | Is a separate Python instance started for each instance of a Django application via Apache? | 0 | 0 | 0 | 57 |
41,624,151 | 2017-01-12T22:27:00.000 | 3 | 0 | 0 | 0 | python,kivy | 41,625,421 | 1 | true | 0 | 1 | I'm not sure if it's causing your problem, but your Rotate instructions aren't bounded by the widget rule and will affect any later widgets - so the Rotate of each Critter is applied to every later one.
To avoid this, add PushMatrix: at the top of the canvas rule and PopMatrix: at the bottom. These instructions effectively save and later revert to the initial rotation state before your change. | 1 | 2 | 0 | I have been playing around with the Kivy Pong tutorial, getting up to speed with the framework, seeing if I could implement a few ideas. I have removed most of the Pong functionality, so I could have only bouncing ball on the screen and added some code to generate multiple bouncing balls on the screen, generated on touch. That worked fine. I then added some extra canvas instructions, so I would have a line drawn indicating the direction the ball is moving. This is where things got weird. The first ball acts just as it should, bouncing around the screen. But any following clicks generate balls that go off screen, randomly change direction and speed and in general behave chaotically. I have been looking at my code and I cannot seem to find any indication of what might be going wrong. I keep all the references to the widgets, I add them to the root widget, I don't seem to be sharing any information between them... Anyway, here is the code, maybe someone can enlighten me. Using latest kivy and python 3.6.
from random import randint
from kivy.app import App
from kivy.clock import Clock
from kivy.config import Config
from kivy.vector import Vector
from kivy.uix.widget import Widget
from kivy.properties import AliasProperty, ListProperty, NumericProperty, ReferenceListProperty
class Playground(Widget):
critters = ListProperty([])
def update(self, dt):
for critter in self.critters:
critter.move()
            if (critter.y < 0) or (critter.y > self.height):
critter.v_y *= -1
            if (critter.x < 0) or (critter.x > self.width):
critter.v_x *= -1
self.score.text = "{}".format(len(self.critters))
def on_touch_down(self, touch):
critter = Critter()
critter.pos = touch.x, touch.y
self.critters.append(critter)
self.add_widget(critter)
class Critter(Widget):
angle = NumericProperty(0)
v_x = NumericProperty(0)
v_y = NumericProperty(0)
velocity = ReferenceListProperty(v_x, v_y)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.velocity = Vector(5, 0).rotate(randint(0, 360))
self.angle = Vector(*self.velocity).angle(Vector(1, 0))
def move(self):
self.pos = Vector(*self.velocity) + self.pos
self.angle = Vector(*self.velocity).angle(Vector(1, 0))
class WorldApp(App):
def build(self):
game = Playground()
Clock.schedule_interval(game.update, 1.0/60.0)
return game
if __name__ == '__main__':
Config.set('kivy', 'desktop', 1)
Config.set('kivy', 'exit_on_escape', 1)
Config.set('graphics', 'resizable', 0)
WorldApp().run()
and the KV file
<Playground>
score: score
canvas:
Color:
rgb: 0.0, 0.1, 0.0
        Rectangle:
pos: self.pos
size: self.size
Label:
id: score
pos: self.parent.width - self.size[0], self.parent.height - self.size[1]
font_size: 16
size: self.texture_size
<Critter>
size: 30, 30
canvas:
Rotate:
angle: self.angle
origin: self.center
axis: 0, 0, 1
Color:
rgb: 0.5, 0.0, 0.0
Ellipse:
pos: self.pos
size: self.size
Color:
rgb: 1, 1, 0.0
Line:
width: 2
points: self.center[0], self.center[1], self.center[0] + self.size[0] / 2, self.center[1] | Kivy widgets behaving erratically | 1.2 | 0 | 0 | 706 |
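Applied to the Critter rule from the question, the accepted answer's fix looks like this (a sketch; only the PushMatrix:/PopMatrix: lines are new, everything else is unchanged):

```kv
<Critter>
    size: 30, 30
    canvas:
        PushMatrix:          # save the current transform
        Rotate:
            angle: self.angle
            origin: self.center
            axis: 0, 0, 1
        Color:
            rgb: 0.5, 0.0, 0.0
        Ellipse:
            pos: self.pos
            size: self.size
        Color:
            rgb: 1, 1, 0.0
        Line:
            width: 2
            points: self.center[0], self.center[1], self.center[0] + self.size[0] / 2, self.center[1]
        PopMatrix:           # restore it so later widgets are not rotated
```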
41,627,247 | 2017-01-13T04:08:00.000 | 7 | 0 | 0 | 0 | python,download,ipython,jupyter-notebook,jupyter | 46,266,094 | 4 | false | 0 | 0 | The download option did not appear for me.
The solution was to open the file (which could not be correctly read as it was a binary file), and to download it from the notebook's notepad. | 1 | 24 | 1 | I'm using ipython notebook by connecting to a server
I don't know how to download things (a data frame, a .csv file, ... for example) programmatically to my local computer, because I can't specifically declare the path like C://user//... It will be downloaded to their machine, not mine. | Download data from a jupyter server | 1 | 0 | 0 | 45,980
41,633,039 | 2017-01-13T10:59:00.000 | 6 | 1 | 1 | 0 | python,debugging,assembly,reverse-engineering,obfuscation | 41,637,182 | 3 | false | 0 | 0 | There's no way to make anything digital safe nowadays.
What you CAN do is make it hard, to a point where it's frustrating, but I admit I don't know Python-specific ways to achieve that. The amount of security of your program is not actually a function of program security, but of psychology.
Yes, psychology.
Given the fact that it's an arms race between crackers and anti-crackers, where both continuously attempt to top each other, the only thing one can do is trying to make it as frustrating as possible. How do we achieve that?
By being a pain in the rear!
Every additional step you take to make sure your code is hard to decipher is a good one.
For example, you could turn your program into a single compiled block of bytecode, which you call from inside your program. Use an external library to encrypt it beforehand and decrypt it afterwards. Do the same, with extra steps, for code blocks of functions. Or have functions in precompiled blocks ready, but broken; at runtime, utilizing byteplay, repair the bytecode with bytes depending on other bytes of different functions, which would then stop your program from working when modified.
There are lots of ways of messing with people's heads and while I can't tell you any python specific ways, if you think in context of "How to be difficult", you'll find the weirdest ways of making it a mess to deal with your code.
Funnily enough, this is much easier in assembly than Python, so maybe you should look into executing foreign code via ctypes or whatever.
Summon your inner Troll! | 2 | 4 | 0 | I'm creating a program in python (2.7) and I want to protect it from reverse engineering.
I compiled it using cx_freeze (it supplies basic security: obfuscation and anti-debugging).
How can I add more protections, such as obfuscation, packing, anti-debugging, encrypting the code, and recognizing VMs?
I thought maybe to encrypt the payload and decrypt it at run time, but I have no clue how to do it. | protect python code from reverse engineering | 1 | 0 | 0 | 12,365
41,633,039 | 2017-01-13T10:59:00.000 | 3 | 1 | 1 | 0 | python,debugging,assembly,reverse-engineering,obfuscation | 41,635,003 | 3 | false | 0 | 0 | Story time: I was a Python programmer for a long time. Recently I joined in a company as a Python programmer. My manager was a Java programmer for a decade I guess. He gave me a project and at the initial review, he asked me that are we obfuscating the code? and I said, we don't do that kind of thing in Python. He said we do that kind of things in Java and we want the same thing to be implemented in python. Eventually I managed to obfuscate code just removing comments and spaces and renaming local variables) but entire python debugging process got messed up.
Then he asked me, Can we use ProGuard? I didn't know what the hell it was. After some googling I said it is for Java and cannot be used in Python. I also said whatever we are building we deploy in our own servers, so we don't need to actually protect the code. But he was reluctant and said, we have a set of procedures and they must be followed before deploying.
Eventually I quit my job after a year, tired of fighting to convince them that Python is not Java. I also had no interest in making them think differently at that point.
TLDR; Because of the open-source nature of Python, there are no viable tools available to obfuscate or encrypt your code. I also don't think it is a problem, as long as you deploy the code on your own server (providing software as a service). But if you actually provide the product to the customer, there are some tools available to wrap up your code or bytecode and ship it like an executable file. But it is always possible to view your code if they want to. Or choose some other language that provides better protection, if it is absolutely necessary to protect your code. Again, keep in mind that it is always possible to reverse engineer the code. | 2 | 4 | 0 | I'm creating a program in python (2.7) and I want to protect it from reverse engineering.
I compiled it using cx_freeze (it supplies basic security: obfuscation and anti-debugging).
How can I add more protections, such as obfuscation, packing, anti-debugging, encrypting the code, and recognizing VMs?
I thought maybe to encrypt the payload and decrypt it at run time, but I have no clue how to do it. | protect python code from reverse engineering | 0.197375 | 0 | 0 | 12,365
41,634,436 | 2017-01-13T12:14:00.000 | 0 | 0 | 0 | 0 | python,json,python-requests | 41,634,715 | 1 | true | 0 | 0 | You could answer your question quite simpply by reading the source code. But anyway: response.json() does read the response's content, obviously - it's just a convenient shortcut for json.loads(response.content). | 1 | 0 | 0 | I read the following on python-requests website:
Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set stream to False or read the content property of the Response object.
But since I use the object returned by req.json() and don't use req thereafter, I wonder when the connection is released. I don't really know how to check that for sure, either.
Many thanks | In requests-python, when is connection released when using req_json = req.json()? | 1.2 | 0 | 1 | 59 |
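To make the accepted answer's claim tangible: the decoding step that `response.json()` performs is plain standard-library JSON decoding of the (already read) body. The byte string below is only a stand-in for `resp.content`, so no network is needed:

```python
import json

# Stand-in for resp.content after the body has been fully read
# (which is the moment the connection goes back to the pool).
body = b'{"connection": "released", "items": [1, 2, 3]}'

data = json.loads(body)  # json.loads accepts bytes as well as str (Python 3.6+)
print(data["items"])     # [1, 2, 3]
```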
41,634,658 | 2017-01-13T12:27:00.000 | 0 | 0 | 1 | 1 | osx-mountain-lion,python-idle,python-3.6 | 41,644,807 | 1 | false | 0 | 0 | I am not exactly sure what you are asking, and whether it has anything to do with OSX, but I can explain IDLE. IDLE has two types of main window: a single Shell and multiple Editor windows.
Shell simulates python running in the interactive REPL mode that you get when you enter 'python' (or 'python3') in a console or terminal window. (The latter depends on the OS.) You enter statements at the >>> prompt. A single-line statement is run when you hit Enter (or Return). A multi-line statement is run when you hit Enter twice. This is the same as in interactive Python.
Editor windows let you enter a multi-statement program. You run the programs by selecting Run and Run module from the menu or by hitting the shortcut key, which by default is F5 (at least on Windows and Linux). This runs the program much the same as if you enter python -i myprogram.py in a console. Program output and input goes to and is received from the Shell window. When the program ends, Python enters interactive mode and prints an interactive prompt (>>>). One can then interact with the objects created by the program.
You are correct that Run does not appear on the menu bar of the Shell. It is not needed as one runs a statement with the Enter key. | 1 | 0 | 0 | Probably a very simple question. I just thought, after someone suggested it here, of trying (and installing) Python 3.6 on a Mac - I've been happily using 2.7 since now. I've never used the IDLE before having done everything via the command line + ATOM to write the program.
I see that 'normally' you should be able to write your program in the shell and then run it in the RUN window. However, I don't see a RUN mode in the window, just the possibility of using the shell window (which you are using anyway). I hope that makes sense!
Is this normal, or have I missed something?
p.s. I'm using OS X 10.8, if that's of any importance. | Run mode not there (IDLE Python 3.6) | 0 | 0 | 0 | 3,646 |
41,635,899 | 2017-01-13T13:37:00.000 | 0 | 0 | 0 | 0 | python-2.7,pygame | 41,636,016 | 1 | false | 0 | 1 | Strange!
I've fixed the issue by giving my platform game script a new name and deleting the *.pyc file that was generated.
Good now I can get on with making the game!
Ant | 1 | 0 | 0 | I've got an issue with Python2.7 and PyGame. It has only just started happening so not sure what's going on.
I've been coding a simple platform game and upon running the script it would immediately print out "160 20" (without quotes) and then start the PyGame script. On exiting the script using the "esc" key it crashed and a "python.exe has stopped working" dialog box appeared. I've also noticed that to exit the script while it's running I have to press "ctrl-c" twice as though there are two scripts running.
The funny thing is that this only seems to happen if set_mode is called in the script.
Another problem started when I decided to mess about with fullscreen. I used "DS = pygame.display.set_mode((W, H), FULLSCREEN|HWSURFACE|DOUBLEBUF)" and was able to get the game into fullscreen, now however any script I run with set_mode in automatically goes into fullscreen regardless of the parameters.
Totally bizarre!
Any thoughts?
PS. I tried uninstalling both PyGame and Python and then re-installing.
Ant | Python2.7 PyGame set_mode issue and crash on script termination | 0 | 0 | 0 | 34 |
41,639,740 | 2017-01-13T16:58:00.000 | 7 | 0 | 1 | 0 | python,batch-file,python-idle | 53,207,276 | 5 | false | 0 | 0 | Right click on the file -> Open with -> Choose default program -> More options -> select the python.exe file and click on it. | 4 | 16 | 0 | I have a really annoying problem: I cannot run a Python file just by double-clicking.
I have tried to set it to open the file with idle.bat but that only starts IDLE editor on double-click, it does not run the Python file. | Running Python file by double-click | 1 | 0 | 0 | 67,480 |
41,639,740 | 2017-01-13T16:58:00.000 | 0 | 0 | 1 | 0 | python,batch-file,python-idle | 68,357,043 | 5 | false | 0 | 0 | You can also start a Django app this way. Once the Django server starts it enters a "wait" kind of mode so a batch file only requires two lines:
ECHO ON
python manage.py runserver
Manage.py can be in any directory, just keep the full folder path in the command within the batch file:
ECHO ON
python C:\temp\manage.py runserver | 4 | 16 | 0 | I have a really annoying problem: I cannot run a Python file just by double-clicking.
I have tried to set it to open the file with idle.bat but that only starts IDLE editor on double-click, it does not run the Python file. | Running Python file by double-click | 0 | 0 | 0 | 67,480 |
41,639,740 | 2017-01-13T16:58:00.000 | 5 | 0 | 1 | 0 | python,batch-file,python-idle | 41,639,878 | 5 | false | 0 | 0 | Right click the file, select open with. If you want to simply run the script, find python.exe and select it. If you want to debug with IDLE, find that executable and select it. | 4 | 16 | 0 | I have really annoying problem, I cannot run a Python file just by double-clicking.
I have tried to set it to open the file with idle.bat but that only starts IDLE editor on double-click, it does not run the Python file. | Running Python file by double-click | 0.197375 | 0 | 0 | 67,480 |
41,639,740 | 2017-01-13T16:58:00.000 | 1 | 0 | 1 | 0 | python,batch-file,python-idle | 56,283,517 | 5 | false | 0 | 0 | When I had both Py2 and Py3, and then removed the former, my script wouldn't run by double-clicking it either (but fine from console.) I realized my __pycache__ folder (same directory as the script) was the issue. Problem solved when deleted. | 4 | 16 | 0 | I have really annoying problem, I cannot run a Python file just by double-clicking.
I have tried to set it to open the file with idle.bat but that only starts IDLE editor on double-click, it does not run the Python file. | Running Python file by double-click | 0.039979 | 0 | 0 | 67,480 |
41,639,782 | 2017-01-13T17:01:00.000 | 1 | 1 | 0 | 0 | python,r,snakemake | 41,672,695 | 1 | true | 0 | 0 | I'm afraid not. This has performance reasons on (a) local systems (circumventing the Python GIL) and (b) cluster systems (scheduling to separate nodes).
Even if there was a solution on local machines, it would need to take care that no sessions are shared between parallel jobs. If you really need to save that time, I suggest merging those scripts. | 1 | 1 | 1 | I'm currently building my NGS pipeline using Snakemake and have an issue regarding the loading of R libraries. Several of the scripts that my rules call require the loading of R libraries. As I found no way of loading them globally, they are loaded inside of the R scripts, which of course is redundant computing time when I'm running the same set of rules on several individual input files.
Is there a way to keep one R session for the execution of several rules and load all required libraries beforehand?
Cheers,
zuup | Globally load R libraries in Snakemake | 1.2 | 0 | 0 | 239 |
41,640,308 | 2017-01-13T17:34:00.000 | 0 | 1 | 1 | 0 | python,token,primitive | 41,640,398 | 1 | true | 0 | 0 | It depends on the context, but usually:
primitives refer to the built-in data types of a language, that is, types that you can represent without creating an object. In Python (and most other languages) such types are booleans, strings, floats, and integers
tokens refer to a "word" (anything between spaces): identifiers, string/number literals, operators. They are used by the interpreter/compiler | 1 | 2 | 0 | I am doing the MIT6.00.1x course from edX and in it, Professor Grimson talks about the primitives of a programming language.
What does it actually mean and, secondly, how is it different from the tokens of a programming language?
Please answer with reference to Python language. | Primitive operations provided by a programming language? | 1.2 | 0 | 0 | 1,225 |
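Since the question asks for a Python reference, the stdlib `tokenize` module can show exactly what "tokens" means for one statement (the statement `x = 3.14 + y` here is an arbitrary example):

```python
import io
import tokenize

# Tokenize one line of Python the same way the parser's front end does.
code = "x = 3.14 + y"
tokens = [(tokenize.tok_name[tok.type], tok.string)
          for tok in tokenize.generate_tokens(io.StringIO(code).readline)
          if tok.string.strip()]  # drop NEWLINE/ENDMARKER bookkeeping tokens
print(tokens)
# [('NAME', 'x'), ('OP', '='), ('NUMBER', '3.14'), ('OP', '+'), ('NAME', 'y')]
```

The `NUMBER` and `NAME` literals in the output are where tokens meet primitives: `3.14` is a token that denotes a value of the primitive float type.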
41,640,348 | 2017-01-13T17:36:00.000 | 0 | 0 | 1 | 0 | python-3.x,sony-camera-api | 41,675,249 | 1 | false | 0 | 0 | I noticed that when I call the method "actTakePicture", the response returns the URL of the postview image within result, but I can't find any stored picture on the memory card (blank card as it was!). So I replaced this card with another one, and then everything went back to normal.
I also realized that once I formatted the first memory card and copied some file in it (just to make sure the card wasn't write protected), it resolved my problem (no more errors from "startMovieRec") and I had pictures and videos stored. | 1 | 0 | 0 | I'm getting shooting fail error (code error: 40400) each time I try to start movie recording. I checked "getShootMode" returns movie and the "cameraStatus" is IDLE. What can cause this error?
All the other APIs work just fine! | "Shooting fail" error as response to "startMovieRec" request from Sony DSC QX10 camera? | 0 | 0 | 0 | 86
41,642,612 | 2017-01-13T20:03:00.000 | 2 | 0 | 0 | 0 | python,amazon-web-services,cassandra | 43,731,632 | 2 | false | 0 | 0 | Check if the query from your python driver is using upper case letters for the keyspace name - change it to lower case | 1 | 1 | 0 | I have a small Cassandra cluster hosted on AWS that I want to connect to using the python drivers. Unfortunately I get "Keyspace does not exist" when trying to connect to it from one specific pc. The strange thing is that the keyspace exists and I can connect to it from other pcs. And I can find that keyspace on that server in cqlsh. How do I fix this error? I've looked into the cassandra version, 3.7.1 which should work fine with my updated python driver. The error is reliably repeatable on that pc. And I can reliably connect to that keyspace on other pcs. | Cassandra cluster returns incorrect error "Keyspace does not exist" when connecting from one specific pc | 0.197375 | 0 | 1 | 2,129
41,646,514 | 2017-01-14T03:29:00.000 | 8 | 0 | 0 | 0 | python,windows,amazon-web-services,amazon-s3,boto3 | 41,682,857 | 2 | false | 0 | 0 | The issue was actually being caused by the system time being incorrect. I fixed the system time and the problem is fixed. | 1 | 6 | 0 | I wrote a python script to download some files from an s3 bucket. The script works just fine on one machine, but breaks on another.
Here is the exception I get: botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden.
I am pretty sure it's related to some system configurations, or something related to the registry, but don't know what exactly. Both machines are running Windows 7 and python 3.5.
Any suggestions? | Trying to access a s3 bucket using boto3, but getting 403 | 1 | 1 | 1 | 8,191
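The accepted fix (a wrong system clock) makes sense because AWS-signed requests carry a timestamp that the server checks against its own clock; large drift gets requests rejected. A minimal sketch of that server-side skew check (the 15-minute window and the helper name are illustrative, not the actual AWS implementation):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=15)  # illustrative tolerance window

def within_skew(request_time, server_time):
    # accept the signed request only if its timestamp is close to "now"
    return abs(server_time - request_time) <= MAX_SKEW

now = datetime(2017, 1, 14, 12, 0, tzinfo=timezone.utc)
print(within_skew(now - timedelta(minutes=5), now))  # True
print(within_skew(now - timedelta(hours=2), now))    # False
```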
41,652,978 | 2017-01-14T17:38:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 41,653,041 | 2 | false | 0 | 1 | You could create a couple of variables that hold the size of the screen, then replace (0, 0) with (self.screenWidth - 0, self.screenHeight - 0). | 1 | 3 | 0 | When I usually create a canvas, the (0, 0) coord is placed at the top left corner of it. Now I want to set it on the bottom left corner. I think I have to set the "scrollbarregion" but I can't understand how to do it.
Can someone explain? | Tkinter: Set 0, 0 coords on the bottom of a canvas | 0 | 0 | 0 | 4,720 |
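The underlying arithmetic for moving the origin to the bottom-left corner is just a flip of the y axis, which can be sketched without a GUI (the canvas height of 400 is an arbitrary example):

```python
CANVAS_H = 400  # hypothetical canvas height in pixels

def to_canvas(x, y):
    # convert "math" coordinates (origin bottom-left, y grows upward)
    # to canvas coordinates (origin top-left, y grows downward)
    return x, CANVAS_H - y

print(to_canvas(10, 0))    # (10, 400) -> the bottom edge of the canvas
print(to_canvas(10, 400))  # (10, 0)   -> the top edge
```

In Tkinter you would apply this conversion to every y coordinate before drawing, which achieves the same effect as shifting the scroll region.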
41,656,960 | 2017-01-15T01:40:00.000 | 2 | 0 | 1 | 0 | python,keyword | 41,656,994 | 1 | true | 0 | 0 | For example you can use pass as placeholder for code, which you want to write later and syntactically you need some statement there. | 1 | 0 | 0 | What is the use of the "pass" keyword in Python? I don't know why or in what instance you would need to use it, since it literally does nothing. Can anyone give me a reason to use these other than for testing? It serves as a good placeholder, true, but does it have any other uses? | Use of "Pass" keyword in Python | 1.2 | 0 | 0 | 3,105 |
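Following up on the `pass` answer above, a small runnable sketch of its two main uses (the names `PlaceholderError` and `todo` are made up for illustration):

```python
class PlaceholderError(Exception):
    pass  # a class body must contain at least one statement; pass keeps it validly empty

def todo():
    pass  # stub for code to be written later; implicitly returns None

consonants = []
for ch in "hello":
    if ch in "aeiou":
        pass  # placeholder branch: nothing to do for vowels (yet)
    else:
        consonants.append(ch)

print("".join(consonants))  # hll
```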
41,661,599 | 2017-01-15T13:27:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,multiprocessing | 41,663,040 | 1 | true | 0 | 0 | There's no difference between starting processes in two terminals or using multiprocessing:
when you open two python consoles you have two processes with their pid
when you run two multiprocessing processes they are forked (on Linux) or started as separate Python instances (Windows) and thus run as independent processes.
What the OS does with these processes is beyond your control. If both processes use a lot of CPU resources and there are few other processes, they will be spread across cores. | 1 | 0 | 0 | I have a more general beginner's question about multiprocessing in Python (please forgive me if I'm utterly wrong in the following). Let's assume I launch two or more IPython consoles in parallel and run some independent functions/scripts via those consoles; does that mean these tasks are performed on multiple cores (one core per task)? If yes, would it be better to collect the tasks in a "main module" and use the multiprocessing library? | Python - Multiprocessing via Different IPython Consols | 1.2 | 0 | 0 | 37
41,662,821 | 2017-01-15T15:27:00.000 | 0 | 1 | 1 | 1 | python,linux,centos | 41,662,958 | 3 | false | 0 | 0 | There is no intrinsic reason why Python should be different from any other scripting language here.
Here is someone else using python in init.d: blog.scphillips.com/posts/2013/07/… In fact, that deals with a lot that I don't deal with here, so I recommend just following that post. | 1 | 2 | 0 | I'm trying to make a Python script run as a service.
It needs to work and run automatically after a reboot.
I have tried to copy it inside the init.d folder, but without any luck.
Can anyone help? (If it requires a cron job: I haven't configured one before, so I would be glad if you could explain how to do it.)
(Running Centos) | How to run python script at startup | 0 | 0 | 0 | 6,891 |
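Whatever mechanism launches the script at boot (an init.d entry, a systemd unit, or an @reboot cron job), the script itself usually needs a long-running main loop that exits cleanly on SIGTERM. A minimal sketch (the `ticks < 3` bound exists only so this demo terminates on its own):

```python
import signal
import time

# Set to False by the signal handler when the service manager asks us to stop.
running = True

def stop(signum, frame):
    global running
    running = False

signal.signal(signal.SIGTERM, stop)

ticks = 0
while running and ticks < 3:  # a real service would loop until SIGTERM
    ticks += 1
    time.sleep(0.01)          # do one unit of work per iteration

print("stopped after", ticks, "ticks")
```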
41,664,257 | 2017-01-15T17:43:00.000 | 3 | 0 | 1 | 0 | python,string,algorithm,search,data-structures | 41,664,342 | 6 | false | 0 | 0 | If the strings are sorted then a binary search is reasonable. As a speedup, you could maintain a dictionary of all possible bigrams ("aa", "ab", etc.) where the corresponding values are the first and last index starting with that bigram (if any do) and so in O(1) time zero in on a much smaller sublist that contains the strings that you are looking for. Once you find a match, do a linear search to the right and left to get all other matches. | 4 | 11 | 0 | I have a list of 500 mil strings. The strings are alphanumeric, ASCII characters, of varying size (usually from 2-30 characters). Also, they're single words (or a combination of words without spaces like 'helloiamastring').
What I need is a fast way to check against a target, say 'hi'. The result should be all strings from the 500mil list which start with 'hi' (for eg. 'hithere', 'hihowareyou' etc.). This needs to be fast because there will be a new query each time the user types something, so if he types "hi", all strings starting with "hi" from the 500 mil list will be shown, if he types "hey", all strings starting with "hey" will show etc.
I've tried with the Tries algo, but the memory footprint to store 300 mil strings is just huge. It should require me 100GB+ ram for that. And I'm pretty sure the list will grow up to a billion.
What is a fast algorithm for this use case?
P.S. In case there's no fast option, the best alternative would be to limit people to enter at least, say, 4 characters, before results show up. Is there a fast way to retrieve the results then? | Prefix search against half a billion strings | 0.099668 | 0 | 0 | 2,126 |
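The sorted-list plus binary-search idea from the answer above can be sketched with the stdlib `bisect` module. The `"\xff"` sentinel marks the end of the prefix range and is safe here because the question says the strings are plain ASCII:

```python
import bisect

# Tiny stand-in for the sorted 500M-string list.
words = sorted(["hello", "hithere", "hihowareyou", "hey", "heyyou", "abc"])

def prefix_matches(sorted_words, prefix):
    lo = bisect.bisect_left(sorted_words, prefix)
    hi = bisect.bisect_left(sorted_words, prefix + "\xff")  # just past the range
    return sorted_words[lo:hi]

print(prefix_matches(words, "hi"))  # ['hihowareyou', 'hithere']
```

Both lookups are O(log n), so each keystroke costs two binary searches plus the size of the result slice.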
41,664,257 | 2017-01-15T17:43:00.000 | 0 | 0 | 1 | 0 | python,string,algorithm,search,data-structures | 41,800,209 | 6 | false | 0 | 0 | In this hypothetical, where the strings being indexed are not associated with any other information (e.g. other columns in the same row), there is relatively little difference between a complete index and keeping the strings sorted in the first place (as in, some difference, but not as much as you are hoping for). In light of the growing nature of the list and the cost of updating it, perhaps the opposite approach will better accomplish the performance tradeoffs that you are looking for.
For any given character at any given location in the string, your base case is that no string exists containing that letter. For example, once 'hello' has been typed, if the next letter typed is 't', then your base case is that there is no string beginning 'hellot'. There is a finite number of characters that could follow 'hello' at location 5 (say, 26). You need 26 fixed-length spaces in which to store information about characters that follow 'hello' at location 5. Each space either says zero if there is no string in which, e.g., 't' follows 'hello', or contains a number of data-storage addresses by which to advance to find the list of characters for which one or more strings involve that character following 'hellot' at location 6 (or use absolute data-storage addresses, although only relative addressess allow the algorithm I propose to support an infinite number of strings of infinite length without any modification to allow for larger pointers as the list grows).
The algorithm can then move forward through this data stored on disk, building a tree of string-beginnings in memory as it goes, and avoiding delays caused by random-access reads. For an in-memory index, simply store the part of the tree closest to the root in memory. After the user has typed 'hello' and the algorithm has tracked that information about one or more strings beginning 'hellot' exists at data-storage address X, the algorithm finds one of two types of lists at location X. Either it is another sequence of, e.g., 26 fixed-length spaces with information about characters following 'hellot' at location 6, or it is a pre-allocated block of space listing all post-fixes that follow 'hellot', depending on how many such post-fixes exist. Once there are enough post-fixes that using some traditional search and/or sort algorithm to both update and search the post-fix list fails to provide the performance benefits that you desire, it gets divided up and replaced with a sequence of, e.g., 26 fixed-length spaces.
This involves pre-allocating a relatively substantial amount of disk-storage upfront, with the tradeoff that your tree can be maintained in sorted form without needing to move anything around for most updates, and your searches can be peformed in full in a single sequential read. It also provides more flexibility and probably requires less storage space than a solution based on storing the strings themselves as fixed-length strings. | 4 | 11 | 0 | I have a list of 500 mil strings. The strings are alphanumeric, ASCII characters, of varying size (usually from 2-30 characters). Also, they're single words (or a combination of words without spaces like 'helloiamastring').
What I need is a fast way to check against a target, say 'hi'. The result should be all strings from the 500mil list which start with 'hi' (for eg. 'hithere', 'hihowareyou' etc.). This needs to be fast because there will be a new query each time the user types something, so if he types "hi", all strings starting with "hi" from the 500 mil list will be shown, if he types "hey", all strings starting with "hey" will show etc.
I've tried with the Tries algo, but the memory footprint to store 300 mil strings is just huge. It should require me 100GB+ ram for that. And I'm pretty sure the list will grow up to a billion.
What is a fast algorithm for this use case?
P.S. In case there's no fast option, the best alternative would be to limit people to enter at least, say, 4 characters, before results show up. Is there a fast way to retrieve the results then? | Prefix search against half a billion strings | 0 | 0 | 0 | 2,126 |
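An in-memory stand-in for the 26-slot child-table layout this answer describes on disk (node dictionaries replace the fixed-length disk records; the structure and names are illustrative):

```python
def new_node():
    # one fixed slot per lowercase letter, plus any stored post-fixes
    return {"children": [None] * 26, "postfixes": []}

root = new_node()

def insert(word):
    node = root
    for ch in word:
        i = ord(ch) - ord("a")
        if node["children"][i] is None:
            node["children"][i] = new_node()
        node = node["children"][i]
    node["postfixes"].append(word)

def count_with_prefix(prefix):
    node = root
    for ch in prefix:
        i = ord(ch) - ord("a")
        if node["children"][i] is None:
            return 0  # the answer's base case: no string has this prefix
        node = node["children"][i]
    # count everything stored at or below this node
    total = len(node["postfixes"])
    stack = [c for c in node["children"] if c]
    while stack:
        n = stack.pop()
        total += len(n["postfixes"])
        stack.extend(c for c in n["children"] if c)
    return total

for w in ["hithere", "hihowareyou", "hello"]:
    insert(w)
print(count_with_prefix("hi"))  # 2
```

On disk, each `children` list would become 26 fixed-length pointer slots read sequentially, which is what lets the lookup avoid random-access seeks.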
41,664,257 | 2017-01-15T17:43:00.000 | 0 | 0 | 1 | 0 | python,string,algorithm,search,data-structures | 41,790,945 | 6 | false | 0 | 0 | If you don't want to use a database, you should create some of the data-handling routines that pre-exist in all database engines:
Don't try to load all the data into memory.
Use a fixed length for all strings. It increases storage consumption but significantly decreases seeking time (the i-th string can be found at position L*i bytes in the file, where L is the fixed length). Create an additional mechanism to work with extremely long strings: store them in a different place and use special pointers.
Sort all of the strings. You can use merge sort to do it without loading all the strings into memory at one time.
Create indexes (the address of the first line starting with 'a', 'b', ...); indexes can also be created for 2-grams, 3-grams, etc. Indexes can be placed in memory to increase search speed.
Use advanced strategies to avoid full index regeneration on data updates: split the data into a number of files by first letter and update only the affected indexes, create empty spaces in the data to decrease the effect of read-modify-write procedures, and create a cache for new lines before they are added to the main storage (and search in this cache too).
Use a query cache for fast processing of popular requests. | 4 | 11 | 0 | I have a list of 500 mil strings. The strings are alphanumeric, ASCII characters, of varying size (usually from 2-30 characters). Also, they're single words (or a combination of words without spaces like 'helloiamastring').
What I need is a fast way to check against a target, say 'hi'. The result should be all strings from the 500mil list which start with 'hi' (for eg. 'hithere', 'hihowareyou' etc.). This needs to be fast because there will be a new query each time the user types something, so if he types "hi", all strings starting with "hi" from the 500 mil list will be shown, if he types "hey", all strings starting with "hey" will show etc.
I've tried with the Tries algo, but the memory footprint to store 300 mil strings is just huge. It should require me 100GB+ ram for that. And I'm pretty sure the list will grow up to a billion.
What is a fast algorithm for this use case?
P.S. In case there's no fast option, the best alternative would be to limit people to enter at least, say, 4 characters, before results show up. Is there a fast way to retrieve the results then? | Prefix search against half a billion strings | 0 | 0 | 0 | 2,126 |
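Point 2 of the answer above (fixed-length records) can be sketched in a few lines; an in-memory `BytesIO` stands in for the on-disk file, and the record length of 8 bytes is an arbitrary example:

```python
import io

L = 8  # fixed record length in bytes (illustrative)
words = ["hello", "hi", "hey"]

# Pack each string into an L-byte record, NUL-padded.
buf = io.BytesIO()
for w in words:
    buf.write(w.encode().ljust(L, b"\x00"))

def read_record(i):
    # the i-th string lives at byte offset L*i: a seek, not a scan
    buf.seek(L * i)
    return buf.read(L).rstrip(b"\x00").decode()

print(read_record(1))  # hi
```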
41,664,257 | 2017-01-15T17:43:00.000 | 0 | 0 | 1 | 0 | python,string,algorithm,search,data-structures | 41,666,468 | 6 | false | 0 | 0 | If you want to force the user to type at least 4 letters, for example, you can keep a key-value map, in memory or on disk, where the keys are all combinations of 4 letters (they are not too many if it is case insensitive, otherwise you can limit it to three), and the values are lists of positions of all strings that begin with the combination.
After the user has typed the three (or four) letters you have at once all the possible strings. From this point on you just loop on this subset.
On average this subset is small enough, i.e. 500M divided by 26^4... just as an example. Actually bigger, because probably not all sets of 4 letters can be prefixes of your strings.
Forgot to say: when you add a new string to the big list, you also update the list of indexes corresponding to the key in the map. | 4 | 11 | 0 | I have a list of 500 mil strings. The strings are alphanumeric, ASCII characters, of varying size (usually from 2-30 characters). Also, they're single words (or a combination of words without spaces like 'helloiamastring').
What I need is a fast way to check against a target, say 'hi'. The result should be all strings from the 500mil list which start with 'hi' (for eg. 'hithere', 'hihowareyou' etc.). This needs to be fast because there will be a new query each time the user types something, so if he types "hi", all strings starting with "hi" from the 500 mil list will be shown, if he types "hey", all strings starting with "hey" will show etc.
I've tried with the Tries algo, but the memory footprint to store 300 mil strings is just huge. It should require me 100GB+ ram for that. And I'm pretty sure the list will grow up to a billion.
What is a fast algorithm for this use case?
P.S. In case there's no fast option, the best alternative would be to limit people to enter at least, say, 4 characters, before results show up. Is there a fast way to retrieve the results then? | Prefix search against half a billion strings | 0 | 0 | 0 | 2,126 |
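The prefix-to-positions map this answer proposes can be sketched with a plain dictionary (3-letter keys here, and a toy word list in place of the 500M strings):

```python
from collections import defaultdict

words = ["hithere", "hihowareyou", "hello", "help", "hey"]

# Build the map once: 3-letter prefix -> positions of matching strings.
index = defaultdict(list)
for pos, w in enumerate(words):
    index[w[:3]].append(pos)

def lookup(prefix):
    # jump straight to the candidate subset, then filter within it
    assert len(prefix) >= 3
    return [words[i] for i in index[prefix[:3]] if words[i].startswith(prefix)]

print(lookup("hit"))  # ['hithere']
```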
41,664,810 | 2017-01-15T18:38:00.000 | 73 | 1 | 0 | 0 | visual-studio-code,python,telegram,telegram-bot,python-telegram-bot | 50,736,131 | 5 | false | 0 | 0 | Post one message from User to the Bot.
Open the https://api.telegram.org/bot<Bot_token>/getUpdates page.
Find this message and navigate to the result->message->chat->id key.
Use this ID as the [chat_id] parameter to send personal messages to the User. | 2 | 39 | 0 | I am using the telepot python library, I know that you can send a message when you have someone's UserID(Which is a number).
I wanna know if it is possible to send a message to someone without having their UserID but only with their username(The one which starts with '@'), Also if there is a way to convert a username to a UserID. | How can I send a message to someone with my telegram bot using their Username | 1 | 0 | 1 | 131,355 |
41,664,810 | 2017-01-15T18:38:00.000 | 14 | 1 | 0 | 0 | visual-studio-code,python,telegram,telegram-bot,python-telegram-bot | 42,990,824 | 5 | false | 0 | 0 | It is only possible to send messages to users who have already used /start on your bot. When they start your bot, you can find update.message.from.user_id straight from the message they sent /start with, and you can find update.message.from.username using the same method.
In order to send a message to "@Username", you will need them to start your bot, and then store the username with the user_id. Then, you can input the username to find the correct user_id each time you want to send them a message. | 2 | 39 | 0 | I am using the telepot python library, I know that you can send a message when you have someone's UserID(Which is a number).
I wanna know if it is possible to send a message to someone without having their UserID but only with their username(The one which starts with '@'), Also if there is a way to convert a username to a UserID. | How can I send a message to someone with my telegram bot using their Username | 1 | 0 | 1 | 131,355 |
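The bookkeeping the second answer describes (store the user_id alongside the username at /start, resolve "@username" later) can be sketched like this; the dictionary-shaped `update` and its field names only mimic the structure the answer mentions, not any specific library:

```python
# username -> user_id, filled in each time someone sends /start
user_ids = {}

def on_start(update):
    user = update["message"]["from"]
    user_ids["@" + user["username"]] = user["id"]

def resolve(username):
    # returns None for users who never started the bot
    return user_ids.get(username)

on_start({"message": {"from": {"username": "alice", "id": 12345}}})
print(resolve("@alice"))  # 12345
```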
41,665,980 | 2017-01-15T20:32:00.000 | 2 | 0 | 1 | 0 | python | 41,666,000 | 3 | false | 0 | 0 | The or operator short-circuits when the first value is truthy (i.e. evaluates to True). When that happens, that first value is returned.
So, True or 5 short-circuits on the True, so it returns True. 5 or True short-circuits on the 5 (because 5 is truthy, which is nonzero for integers), so it returns 5. | 1 | 0 | 0 | I'm a beginner and this is a relatively simple question but I'm having trouble trying to figure it out. When you type "True or 5" into python, it returns True, and when you type "5 or True" it returns 5. Why is this? Why don't they return the same answer? Thanks! | Python: True or 5 vs 5 or True | 0.132549 | 0 | 0 | 412 |
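The rule from the answers in one runnable snippet: `or` returns the first truthy operand, and falls through to the last operand when everything is falsy:

```python
print(True or 5)  # True  (first operand is truthy, short-circuit)
print(5 or True)  # 5     (5 is truthy, so it is returned as-is)
print(0 or "")    # ''    (both falsy -> the last operand is returned)
```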
41,666,809 | 2017-01-15T22:01:00.000 | 2 | 0 | 0 | 0 | python,video,video-streaming,httprequest,buffering | 41,672,032 | 1 | true | 0 | 0 | Before playing an MP4 file the client (e.g. browser) needs to read the header part of the file. An MP4 is broken into 'Atoms' and the Moov atom is the header or index atom for the file.
For MP4 files that will be streamed, a common optimisation is to move this Moov atom to the front of the file.
This allows the client to get the moov at the start, and it will then have the information it needs to allow you to jump to the offset you want in your case.
If you don't have the moov atom at the start, the client needs to either download the whole file or, if it is a bit more sophisticated, jump around the file with range requests until it finds it. | 1 | 1 | 0 | I have been breaking my head for the past 2 weeks, and I still can't figure it out.
I'm trying to build a server-client based streaming player in Python (IronPython for the WPF GUI) that streams video files. My problem is when the client requests to seek to a part that he has not loaded yet. When I try to send him just the middle of the .mp4 file, he can't seem to be able to play it.
Now I know such a thing exists, because every online player has it, and it uses the HTTP 206 Partial Content request, where the client just requests the byte range that he desires and the server sends it to him.
My question is - how is the client able to play the video with a gap in bytes in his .mp4 file - how can he start watching from the middle of the file? When I try it, the player just won't open the file.
And more importantly: how can I implement this on my Server-Client program to enable free seeking?
I really tried to look for a simple explanation for this all over the internet...
Please explain it thoroughly and in simple terms for a novice such as me, I would highly appreciate it.
Thanks in advance. | How does HTTP 206 Partial Content Request works | 1.2 | 0 | 1 | 1,969 |
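A sketch (not a full HTTP server) of the byte-range mechanics behind a 206 Partial Content response: the client names a range, the server returns exactly that slice plus a `Content-Range` header. The `data` bytes stand in for an .mp4 file:

```python
data = bytes(range(100))  # stand-in for an .mp4 file on the server

def handle_range(range_header):
    # range_header looks like "bytes=10-19" (single closed range only, for brevity)
    spec = range_header.split("=", 1)[1]
    start_s, end_s = spec.split("-")
    start, end = int(start_s), int(end_s)
    body = data[start:end + 1]  # HTTP ranges are inclusive on both ends
    headers = {"Content-Range": f"bytes {start}-{end}/{len(data)}"}
    return 206, headers, body

status, headers, body = handle_range("bytes=10-19")
print(status, headers["Content-Range"], len(body))  # 206 bytes 10-19/100 10
```

The player can only use such a mid-file slice if it already holds the moov index (see the answer above); the range mechanism itself just delivers bytes.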