Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
44,602,353 | 2017-06-17T08:18:00.000 | 2 | 0 | 1 | 0 | python,python-3.x | 68,982,878 | 3 | false | 0 | 0 | To comment a block of Python code on Repl.it on a Windows machine:
Select block of lines to comment
Type "Ctrl + K" followed by "Ctrl + C"
To uncomment use: "Ctrl + K" followed by "Ctrl + U" | 2 | 0 | 0 | I searched for a shortcut command that would comment/uncomment a block for repl.it.
I'm working on a python 3 project and I often need to comment a big section of my code.
Is there a shortcut command for that? | Repl.it Python 3 Short cut Comment/Uncomment a block | 0.132549 | 0 | 0 | 11,174 |
44,602,353 | 2017-06-17T08:18:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 65,972,062 | 3 | false | 0 | 0 | From the command palette: ( Ctrl + Shift + P )
Add line comment: Ctrl + K / Ctrl + C
Remove line comment: Ctrl + K / Ctrl + U | 2 | 0 | 0 | I searched for a shortcut command that would comment/uncomment a block for repl.it.
I'm working on a python 3 project and I often need to comment a big section of my code.
Is there a shortcut command for that? | Repl.it Python 3 Short cut Comment/Uncomment a block | 0.066568 | 0 | 0 | 11,174 |
44,602,603 | 2017-06-17T08:52:00.000 | 0 | 0 | 1 | 0 | python,atom-editor | 50,898,644 | 8 | false | 0 | 0 | Same for me. I followed the previous answer but could not find autocomplete-plus (June 2018). I installed autocomplete and now both packages work just fine. | 6 | 7 | 0 | I have just installed the Atom IDE and the package autocomplete-python (on Windows). But the package is not working. Do I have to make any setting changes? (I have disabled autocomplete-plus and autocomplete-snippets).
Do I need to separately install Jedi? | Atom IDE autocomplete-python not working | 0 | 0 | 0 | 22,110 |
44,602,603 | 2017-06-17T08:52:00.000 | 0 | 0 | 1 | 0 | python,atom-editor | 51,572,729 | 8 | false | 0 | 0 | For me, enabling autocomplete-plus didn't help.
It worked when I changed the Python version that I am using from Python 3.7 to Python 3.6.6. | 6 | 7 | 0 | I have just installed the Atom IDE and the package autocomplete-python (on Windows). But the package is not working. Do I have to make any setting changes? (I have disabled autocomplete-plus and autocomplete-snippets).
Do I need to separately install Jedi? | Atom IDE autocomplete-python not working | 0 | 0 | 0 | 22,110 |
44,602,603 | 2017-06-17T08:52:00.000 | 1 | 0 | 1 | 0 | python,atom-editor | 54,603,101 | 8 | false | 0 | 0 | Late to the party, but I had the same issue and resolved it by adding a path to my site-packages for one of my virtual environments in the settings.
On the toolbar, go to File-> Settings -> packages.
Find your autocomplete-python package.
Go to the settings of the autocomplete-python package.
Scroll down to "Extra Paths For Packages".
Copy and paste a path location to your site packages.
e.g.:
C:\Users\my_username\Miniconda3\envs\my_env_name\Lib\site-packages
Celebrate :) | 6 | 7 | 0 | I have just installed the Atom IDE and the package autocomplete-python (on Windows). But the package is not working. Do I have to make any setting changes? (I have disabled autocomplete-plus and autocomplete-snippets).
Do I need to separately install Jedi? | Atom IDE autocomplete-python not working | 0.024995 | 0 | 0 | 22,110 |
44,602,603 | 2017-06-17T08:52:00.000 | 0 | 0 | 1 | 0 | python,atom-editor | 68,328,117 | 8 | false | 0 | 0 | I also had the same problem while trying to use the django-atom autocomplete snippet. After reading different articles on it, here is how I went about it.
Go to the directory of the installed autocomplete package, in this case here:
C:\Users\user\.atom\packages\django-atom
Go to the snippets folder and you'll find a .cson file; right click and open it with an editor. In this case, the directory is:
C:\Users\user\.atom\packages\django-atom\snippets
Copy everything inside the file and go back to the snippets.cson file in the .atom parent directory, in this case here:
C:\Users\user\.atom
Right click and open the snippets file with an editor, scroll down and paste everything you copied earlier into the file, then save.
Bro, now you can enjoy your beautiful snippets. | 6 | 7 | 0 | I have just installed the Atom IDE and the package autocomplete-python (on Windows). But the package is not working. Do I have to make any setting changes? (I have disabled autocomplete-plus and autocomplete-snippets).
Do I need to separately install Jedi? | Atom IDE autocomplete-python not working | 0 | 0 | 0 | 22,110 |
44,602,603 | 2017-06-17T08:52:00.000 | 20 | 0 | 1 | 0 | python,atom-editor | 52,023,811 | 8 | false | 0 | 0 | If autocomplete-python in Atom is not working with Python 3.7:
In Windows, go to:
C:\Users\username\.atom\packages\autocomplete-python\lib\jedi\parser
Or in linux:
cd ~/.atom/packages/autocomplete-python/lib/jedi/parser
Duplicate the file named "grammar3.6.txt" and rename the copy to "grammar3.7.txt".
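A cross-platform way to do the same duplication from Python (the path below is the default Atom package location and may differ on your machine):

import os
import shutil

# Copy the 3.6 grammar so jedi can find one named for Python 3.7.
parser_dir = os.path.expanduser(
    os.path.join("~", ".atom", "packages", "autocomplete-python", "lib", "jedi", "parser"))
shutil.copyfile(os.path.join(parser_dir, "grammar3.6.txt"),
                os.path.join(parser_dir, "grammar3.7.txt"))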
It worked for me with Python 3.7! | 6 | 7 | 0 | I have just installed the Atom IDE and the package autocomplete-python (on Windows). But the package is not working. Do I have to make any setting changes? (I have disabled autocomplete-plus and autocomplete-snippets).
Do I need to separately install Jedi? | Atom IDE autocomplete-python not working | 1 | 0 | 0 | 22,110 |
44,602,603 | 2017-06-17T08:52:00.000 | 13 | 0 | 1 | 0 | python,atom-editor | 44,666,764 | 8 | false | 0 | 0 | It worked when I enabled autocomplete-plus. It seems autocomplete-plus is required for autocomplete-python to work. (I had initially followed a youtube video in which autocomplete-plus and -snippets were disabled and then autocomplete-python was installed.) | 6 | 7 | 0 | I have just installed the Atom IDE and the package autocomplete-python (on Windows). But the package is not working. Do I have to make any setting changes? (I have disabled autocomplete-plus and autocomplete-snippets).
Do I need to separately install Jedi? | Atom IDE autocomplete-python not working | 1 | 0 | 0 | 22,110 |
44,604,689 | 2017-06-17T12:31:00.000 | 0 | 0 | 1 | 0 | python,windows-10,pycharm,python-3.5,nvidia | 48,407,706 | 2 | false | 0 | 0 | Let me describe a case with the Android emulator (qemu-system-x86_64).
I experienced the same error: "Process finished with exit code -1073740791 (0xC0000409)".
I played with BIOS settings, the path to Java, memory and disk speed enhancements, and rolling back to the old original card drivers and Nvidia support - no result.
Then I restored all the default settings mentioned above and:
Right clicked on the Desktop and selected 'Nvidia control panel' -> 3D settings -> Manage 3D settings. You will see two tabs there, including 'Global settings'.
Set all of the options to clamp, off, or '1'.
Then, in the 'Program settings' tab, I navigated to the emulator app and also disabled everything possible. Now everything works just fine.
If needed in the future, you can change them for a specific application.
Perhaps this will help somebody. | 2 | 10 | 0 | I have a script I am testing in PyCharm; it runs fine for around 3-4 minutes, then it says "Python stopped working" and the script stops running. In the PyCharm output, it says Process finished with exit code -1073740791 (0xC0000409).
Is this a bug or something wrong with my computer/Pycharm? | Pycharm error "Process finished with exit code -1073740791 (0xC0000409)" | 0 | 0 | 0 | 11,087 |
44,604,689 | 2017-06-17T12:31:00.000 | 1 | 0 | 1 | 0 | python,windows-10,pycharm,python-3.5,nvidia | 51,061,875 | 2 | false | 0 | 0 | I fixed the same problem by upgrading the Nvidia driver to 398.36. I was having problems running code using PyQt5 and matplotlib through PyCharm, but the driver update solved it. I was not expecting this to be the issue, but it fixed it! | 2 | 10 | 0 | I have a script I am testing in PyCharm; it runs fine for around 3-4 minutes, then it says "Python stopped working" and the script stops running. In the PyCharm output, it says Process finished with exit code -1073740791 (0xC0000409).
Is this a bug or something wrong with my computer/Pycharm? | Pycharm error "Process finished with exit code -1073740791 (0xC0000409)" | 0.099668 | 0 | 0 | 11,087 |
44,605,000 | 2017-06-17T13:08:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-apps | 44,605,316 | 2 | false | 1 | 0 | It's all right to use any HTML in a Django TextField; for example, if you are using TinyMCE, this kind of rich content can be handled very easily | 1 | 0 | 0 | Is it good to put HTML code in a Django TextField that will be used in a blog app? | Is it ok to put html code in Django TextField? | 0 | 0 | 0 | 1,043 |
44,606,342 | 2017-06-17T15:37:00.000 | 1 | 0 | 0 | 0 | python,postgresql,python-3.x,flask | 44,608,243 | 1 | true | 1 | 0 | These requirements are more or less straightforward to meet, given that you will have a persistent database that can share the state of each file with multiple sessions - and even multiple deploys - of your system; that is more or less a given with Python + PostgreSQL.
I'd suggest you create a Python class with a few fields you can use for the whole process, and use an ORM like SQLAlchemy or Django's to bind it to a database. The fields you will need are more or less: filename, filepath, timestamp, check_status - and some extras like "locked_for_checking" and "checker" (which might be a foreign key to a Users collection). When presenting a file as a suggestion to a given user, you set the "locked_for_checking" flag - and for the overall listing, you build a list that excludes files that are "checked" or "locked_for_checking" (and sort the files by timestamp/size or other metadata that meets your requirements).
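A minimal sketch of such a model with Django's ORM (field names follow the suggestion above; lengths and defaults are assumptions):

from django.conf import settings
from django.db import models

class MediaFile(models.Model):
    filename = models.CharField(max_length=255)
    filepath = models.CharField(max_length=1024)
    timestamp = models.DateTimeField()
    check_status = models.CharField(max_length=20, default="unchecked")
    locked_for_checking = models.BooleanField(default=False)
    # Which user currently holds the lock, if any.
    checker = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True,
                                on_delete=models.SET_NULL)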
You will need some logic to "unlock for checking" if the first user does not complete the checking in a given time frame, but that is it. | 1 | 0 | 0 | Firstly, this question isn't a request for code suggestions; it's more of a question about the general approach others would take for a given problem.
I've been given the task of writing a web application in python to allow users to check the content of media files held on a shared server. There will also likely be a postgres database from which records for each file will be gathered.
I want the web app to:
1) Suggest the next file to check (from files that have yet to be checked) and have a link to the next unchecked file once the result of the previous check has been submitted.
2) Prevent the app from suggesting the same file to multiple users simultaneously.
If it was just one user checking the files it would be easier, but I'm having trouble conceptualising how I'm going to achieve the two points above with multiple simultaneous users.
As I say, this isn't a code request; I'm just interested in what approach/tools others feel would be best suited to this type of project.
If there are any Python libraries that could be useful, I'd be interested to hear any recommendations.
Thanks | Python web app ideas- incremental/unique file suggestions for multiple users | 1.2 | 1 | 0 | 49 |
44,608,785 | 2017-06-17T19:48:00.000 | 1 | 0 | 0 | 0 | python,hive,amazon-emr,amazon-data-pipeline | 44,612,782 | 1 | false | 0 | 0 | I would suggest going with the data pipeline into S3 approach. And then have a script to read from S3 and process your records. You can schedule this to run on regular intervals to backup all your data. I don't think that any solution that does a full scan will offer you a faster way, because it is always limited by read throughput.
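For the one-off copy, a rough boto3 sketch of a timestamp-filtered scan and batch write (table names and the 'ts' attribute are placeholders; as noted, the scan is still bounded by read throughput):

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
src = dynamodb.Table("source-table")
dst = dynamodb.Table("target-table")

kwargs = {"FilterExpression": Attr("ts").between("2017-01-01", "2017-06-01")}
with dst.batch_writer() as batch:
    while True:
        page = src.scan(**kwargs)
        for item in page["Items"]:
            batch.put_item(Item=item)
        if "LastEvaluatedKey" not in page:
            break
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]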
Another possible approach is to use DynamoDB Streams and Lambdas to maintain the second table in real time. Still, you will first need to process the existing 15 GB once using the approach above, and then switch to Lambdas to keep the tables in sync. | 1 | 1 | 0 | I have a table of size 15 GB in DynamoDB. Now I need to transfer some data, based on timestamps (which are in the db), to another DynamoDB table. What would be the most efficient option here?
a) Transfer to S3, process with pandas or some other way, and put it in the other table (the data is huge; I feel this might take a long time)
b) Through DataPipeLine (read a lot but don't think we can put queries over there)
c) Through EMR and Hive (this seems to be the best option, but is it possible to do everything through a Python script? Would I need to create an EMR cluster and keep using it, or create and terminate one every time? How can EMR be used efficiently and cheaply as well?) | Data transfer from DynamoDB table to another DynamoDB table | 1.2 | 1 | 0 | 831
44,610,150 | 2017-06-17T22:54:00.000 | 0 | 0 | 1 | 1 | python,windows,db2 | 47,904,208 | 4 | false | 0 | 0 | ibm_db does not support Python 3.6; you need to downgrade to Python 3.5.4. That is how I did it. | 2 | 1 | 0 | I downloaded Python 3.6 from Python's website (from the download page for Windows) and it seems only the interpreter is available. I don't see anything else (Standard Library or something) in my system. Is it included in the interpreter and hidden or something?
I tried to install ibm_db 2.0.7 as an extension of Python DB API, but the installation instructions seem too old. The paths defined don't exist in my Win10.
By the way, I installed the latest Platform SDK as instructed (which is the predecessor of the Windows 10 SDK, so I had to install the Windows 10 SDK instead). I also installed .NET SDK V1.1, which is said to include Visual C++ 2003 (Visual C++ 2003 is not available today on its own). I considered installing VS2017, but because it was too big (12-some GB) I passed on that option.
I am stuck and can't proceed because I don't know what has changed since the installation instructions were written, and what else I need to do. How can I install Python with the ibm_db package on Windows 10? | How to install Python libraries/packages and ibm_db 2.0.7 installation instructions? | 0 | 0 | 0 | 4,969
44,610,150 | 2017-06-17T22:54:00.000 | 0 | 0 | 1 | 1 | python,windows,db2 | 44,611,663 | 4 | false | 0 | 0 | For Python libraries you can always use pip, which is the Python package installer.
If you are using Python 3.6 then you already have pip. Just make sure you're using pip from the main command line, not the Python interpreter prompt. | 2 | 1 | 0 | I downloaded Python 3.6 from Python's website (from the download page for Windows) and it seems only the interpreter is available. I don't see anything else (Standard Library or something) in my system. Is it included in the interpreter and hidden or something?
I tried to install ibm_db 2.0.7 as an extension of Python DB API, but the installation instructions seem too old. The paths defined don't exist in my Win10.
By the way, I installed the latest Platform SDK as instructed (which is the predecessor of Windows 10 SDK, so I had to to install Windows 10 SDK instead). I also installed .NET SDK V1.1, which is told to include Visual C++ 2003 (Visual C++ 2003 is not available today on its own). I considered to install VS2017, but because it was too big (12.some GB) I passed on that option.
I am stuck and can't proceed for I don't know which changes happened from the point the installation instructions have been written and what else I need to do. How can I install python with the ibm_db package on Windows 10? | How to install Python libraries/packages and ibm_db 2.0.7 installation instructions? | 0 | 0 | 0 | 4,969 |
44,611,060 | 2017-06-18T01:52:00.000 | 2 | 0 | 1 | 0 | python,string,list,python-3.x | 44,611,083 | 3 | false | 0 | 0 | Any Python class is free to define various operations however it likes. Strings happen to implement the sequence protocol (meaning that iteration and [i] item access behave the same as lists), but also implement __contains__, which is responsible for x in y checks, to look for substrings rather than just single characters.
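A tiny sketch of how any class can define its own membership test via __contains__ (the class itself is purely illustrative):

class EveryOther(object):
    def __init__(self, text):
        self.text = text
    def __contains__(self, item):
        # 'in' checks are routed here, so any rule is possible
        return item in self.text[::2]

print('ah' in EveryOther('aXhX'))   # True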
It is common for x in y membership testing to mean "x will appear if you print all the elements of y", but there's no rule saying that that has to be the case. | 1 | 0 | 0 | My book says -
Strings and lists are actually similar, if you consider a string to be a “list” of single text characters.
Suppose that I have a string, namely
name = 'Zophie'.
Now this string should have some resemblance with a list. So I type in another code that would tell me what should the items of that list be. The code goes like -
for i in name:
    print('* * * ' + i + ' * * *')
The output is:
* * * Z * * *
* * * o * * *
* * * p * * *
* * * h * * *
* * * i * * *
* * * e * * *
This clearly shows that the list items of name are Z,o,p,h,i,e.
Now if I try to check whether the list has an item 'Zop' by using:
'Zop' in name
It returns True! That is, Python says that Zophie contains an item, namely 'Zop', but when I tried to list all the items using the for loop, Zop didn't show up.
What’s happening here? | Why do any of the ordered characters in a string appear to be a part of a list which is equivalent to that string? | 0.132549 | 0 | 0 | 36 |
44,611,373 | 2017-06-18T03:05:00.000 | 0 | 0 | 1 | 0 | python,nonetype | 44,611,434 | 1 | false | 0 | 0 | In Python, True or -1 > None evaluates as True or (-1 > None), which is always True, regardless of the expression | 1 | 0 | 0 | In Python, True or -1 > None returns True but True > None and -1 > None returns False???
Why is this? | "True or -1 > None" returns True | 0 | 0 | 0 | 82 |
44,614,977 | 2017-06-18T12:29:00.000 | 1 | 0 | 0 | 0 | python,lua,torch,pytorch | 44,638,348 | 1 | true | 0 | 0 | No, Torch7 use static computational graphs, as in Tensorflow. It is one of the major differences between PyTorch and Torch7. | 1 | 0 | 1 | Pytorch have Dynamic Neural Networks (defined-by-run) as opposed to Tensorflow which have to compile the computation graph before run.
I see that both Torch7 and PyTorch depend on TH, THC, THNN, THCUNN (C library). Does Torch7 have Dynamic Neural Networks (defined-by-run) feature ? | Is Torch7 defined-by-run like Pytorch? | 1.2 | 0 | 0 | 79 |
44,615,007 | 2017-06-18T12:33:00.000 | 0 | 0 | 1 | 1 | python,django,macos,terminal,installation | 44,615,048 | 4 | false | 1 | 0 | . ~/.bashrc
alias python='/usr/bin/python3.4' | 4 | 0 | 0 | I have looked for Python 2.7.12 in my apps and docs but I can't find it...
I'm using a macbook pro.
I can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one.
I want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later.
is there a way to tell the terminal to use 3.6 instead? | Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12 | 0 | 0 | 0 | 2,694 |
44,615,007 | 2017-06-18T12:33:00.000 | 1 | 0 | 1 | 1 | python,django,macos,terminal,installation | 44,615,199 | 4 | false | 1 | 0 | Open the text editor like nano , vim or gedit and open the .bashrc file ,
nano ~/.bashrc
and create the bash alias,
To do so add the following line into the .bashrc file:
alias python='/usr/bin/python3.6'
Save the file and re-open the terminal.
Edit:
Similarly, if you don't want to create the direct alias.
As @exprator suggested above you can also use python command for python 2 and python3 to use Python 3 version | 4 | 0 | 0 | I have looked for Python 2.7.12 in my apps and docs but I can't find it...
I'm using a macbook pro.
I can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one.
I want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later.
is there a way to tell the terminal to use 3.6 instead? | Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12 | 0.049958 | 0 | 0 | 2,694 |
44,615,007 | 2017-06-18T12:33:00.000 | 0 | 0 | 1 | 1 | python,django,macos,terminal,installation | 44,618,123 | 4 | false | 1 | 0 | By the way, you shouldn't use the default environment to develop. Instead, you should use virtualenv | 6 | 0 | 0 | I have looked for Python 2.7.12 in my apps and docs but I can't find it...
I'm using a macbook pro.
I can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one.
I want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later.
is there a way to tell the terminal to use 3.6 instead? | Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12 | 0 | 0 | 0 | 2,694 |
44,615,007 | 2017-06-18T12:33:00.000 | 1 | 0 | 1 | 1 | python,django,macos,terminal,installation | 44,615,243 | 4 | false | 1 | 0 | Just use python in the terminal for Python 2.7 and type python3 to use Python 3.6 when you need it | 6 | 0 | 0 | I have looked for Python 2.7.12 in my apps and docs but I can't find it...
I'm using a macbook pro.
I can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one.
I want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later.
is there a way to tell the terminal to use 3.6 instead? | Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12 | 0.049958 | 0 | 0 | 2,694 |
44,616,799 | 2017-06-18T15:52:00.000 | 0 | 0 | 0 | 0 | python,cytoscape | 45,496,273 | 1 | true | 1 | 0 | Great question. This is within scope of this forum.
The answer is "it depends". Cytoscape apps themselves must be Java (or something that runs in JVM, though there's only documentation support for Java and the forums will give best advice for Java).
However, the Cytoscape Cyberinfrastructure (CI) allows Python-based services (e.g., the Diffusion service) called by Cytoscape apps (e.g., the Diffusion app). The service must be deployed somewhere on the web (e.g., in a Kubernetes cluster).
If you'd like help with that route, you'll find enthusiastic support ... please e-mail the cytoscape-app-dev at googlegroups.com forum directly. | 1 | 0 | 0 | I want to create a new cytoscape app for analysing protein interaction but I don't know if I can use python or just java. | can I use python for create cytoscape app? | 1.2 | 0 | 0 | 159 |
44,617,331 | 2017-06-18T16:51:00.000 | 1 | 0 | 0 | 0 | python,numpy | 44,617,764 | 2 | true | 0 | 0 | Since X is a numpy array, you can do X.shape instead of the repeated len. I expect it to show (13934, 74).
I expect Y.shape to be (13934,). It's a 1d array, which is why Y[0] is a number, numpy.int64. And since it is 1d, transpose (swapping axes) doesn't do anything. (this isn't MATLAB where everything has at least 2 dimensions.)
It looks like you want to create an array that has shape (13934, 75). To do that you'll need to add a dimension to Y. Y[:,None] is a concise way of doing that. The shape of that is (13934,1), which will concatenate with X. If that None syntax is puzzling, try Y.reshape(-1,1) (or reshape(13934,1)). | 1 | 0 | 1 | I have a matrix X which has len(X) equal to 13934 and len(X[i]), for all i, equal to 74, and I have an array Y which has len(Y) equal to 13934, while len(Y[i]) raises TypeError: object of type 'numpy.int64' has no len() for all i.
When I try np.vstack((X,Y)) or result = np.concatenate((X, Y.T), axis=1)
I get ValueError: all the input array dimensions except for the concatenation axis must match exactly
What is the problem?
When I print out Y it says array([1,...], dtype=int64) and when I print out X it says array([data..]) with no dtype. Could this be the problem?
I tried converting them both to float32 by doing X.view('float32') and this did not help. | Python vertical stack not working | 1.2 | 0 | 0 | 203 |
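A minimal reproduction of the fix from the answer above, with small stand-in shapes:

import numpy as np

X = np.zeros((5, 74))        # stands in for the (13934, 74) matrix
Y = np.arange(5)             # 1-d labels of shape (5,)
result = np.concatenate((X, Y[:, None]), axis=1)  # Y[:, None] has shape (5, 1)
print(result.shape)          # (5, 75)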
44,618,760 | 2017-06-18T19:31:00.000 | 1 | 0 | 1 | 0 | c#,python,data-sharing,multiple-languages | 44,618,813 | 2 | false | 0 | 0 | Any kind of IPC (Inter-Process Communication): sockets or shared memory. Any common format: plain text files or structured formats, e.g. JSON. Or a database.
For example, if I have a program that uses C# to run and another that uses python to run, and I want to share some strings between the two, how can I do it?
I thought about using sockets for this but I'm not sure that this is the right approach, I also thought about saving the data in a file, then reading the file from the other program, but, it might even be worse than using sockets.
Note that I need to share strings almost a thousand times between the programs | How to share data between programs that use different languages to run | 0.099668 | 0 | 0 | 406 |
44,618,760 | 2017-06-18T19:31:00.000 | 3 | 0 | 1 | 0 | c#,python,data-sharing,multiple-languages | 44,618,831 | 2 | true | 0 | 0 | There are a lot of ways to do so; I would recommend reading more about IPC (Inter-Process Communication): sockets, pipes, named pipes, shared memory, etc.
Each method has its own advantages; therefore, you need to think about what you're trying to achieve and choose the method that fits you best. | 2 | 0 | 1 | I want to share data between programs that run locally and use different languages, and I don't know how to approach this.
For example, if I have a program that uses C# to run and another that uses python to run, and I want to share some strings between the two, how can I do it?
I thought about using sockets for this but I'm not sure that this is the right approach, I also thought about saving the data in a file, then reading the file from the other program, but, it might even be worse than using sockets.
Note that I need to share strings almost a thousand times between the programs | How to share data between programs that use different languages to run | 1.2 | 0 | 0 | 406 |
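A minimal sketch of the socket option on the Python side (port and encoding are arbitrary choices; the C# program would connect with its own TCP client):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5005))
srv.listen(1)
conn, _ = srv.accept()
while True:
    data = conn.recv(4096)
    if not data:
        break
    print(data.decode("utf-8"))   # one of the shared strings from the other program
conn.close()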
44,618,843 | 2017-06-18T19:41:00.000 | 0 | 0 | 0 | 0 | python-3.x,scapy,packet-sniffers,sniffing | 44,703,883 | 2 | false | 0 | 0 | Maybe you can get your device MAC address and filter any packets with that address as source address. | 1 | 1 | 0 | how do I sniff packets that are only outbound packets?
I tried to sniff only destination port but it doesn't succeed at all | Python(scapy): how to sniff packets that are only outboun packets | 0 | 0 | 1 | 736 |
44,619,977 | 2017-06-18T22:04:00.000 | 2 | 0 | 0 | 0 | python,django,django-models | 44,621,292 | 3 | false | 1 | 0 | If you just go ahead and change the model name in your models.py file, the "makemigrations" command is usually smart enough to pick it up. It will ask you if you changed the model name and create a migration to rename the table accordingly if you confirm. | 1 | 3 | 0 | How do you change the model class name in Django without losing data? Does anybody know? Thank you very much to all for the help in advance. | How to change model class name without losing data? | 0.132549 | 0 | 0 | 1,361
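For reference, a hypothetical migration of the kind makemigrations generates when it detects the rename (app, model, and dependency names are illustrative):

from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("myapp", "0004_previous")]
    operations = [
        # Renames the model (and its table) while keeping all existing rows.
        migrations.RenameModel(old_name="OldName", new_name="NewName"),
    ]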
44,620,403 | 2017-06-18T23:14:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,keras | 44,623,114 | 2 | false | 0 | 0 | Keras pop() removes the last (aka top) layer, not the bottom one.
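A minimal sketch of that behaviour on a toy Sequential model (layer sizes are arbitrary):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, input_shape=(10,)),
    Dense(32),
    Dense(1),          # the "top" layer, which pop() removes
])
model.pop()
model.summary()        # the model now ends at the Dense(32) layer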
I suggest you use model.summary() to print out the list of layers and then subsequently use pop() until only the necessary layers are left. | 1 | 2 | 1 | In Keras there is a feature called pop() that lets you remove the bottom layer of a model. Is there any way to remove the top layer of a model?
I have a fully saved pre-trained Variational Autoencoder and am trying to only load the decoder (the bottom four layers).
I am using Keras with a Tensorflow backend. | Keras: Is there any way to "pop()" the top layers? | 0.099668 | 0 | 0 | 1,067 |
44,621,236 | 2017-06-19T01:52:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,pygame,trigonometry | 44,689,124 | 1 | false | 0 | 1 | Ok guys, I am sorry, it turns out that I was giving the math.cosfunction degrees as opposed to radians. I did try giving it radians earlier, however for some reason it didn't work. | 1 | 0 | 0 | Apologies for the long title. What I want to do is to be able to spawn a bullet to the left or right of the centre of an object, the player, which can turn in a full 360 degrees to the left and right. At the moment I am using x = hypotenuse * cos(player angle) and y = hypotenuse * sin(player angle). This works very strangely with it working while pointing upwards, with the bullet spawning on the right as intended, but as soon as it is angled downwards it starts spawning on the left.
I have asked a teacher and we have done some playing around; however, they have not been able to help much as of yet. | Making an object spawn to the left or right relative to another angled object | 0 | 0 | 0 | 66
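To make the degrees-versus-radians fix concrete, a small sketch (angle and distance values are hypothetical):

import math

angle_deg = 30                    # player angle in degrees
r = 10                            # distance from the player's centre to the spawn point
a = math.radians(angle_deg)       # convert first: math.cos/math.sin expect radians
x_offset = r * math.cos(a)
y_offset = r * math.sin(a)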
44,621,606 | 2017-06-19T02:52:00.000 | 1 | 0 | 0 | 0 | python,postgresql,amazon-web-services | 44,621,822 | 1 | true | 0 | 0 | I assume this is for RDS. There is no direct way via the AWS API. You could potentially get it from CloudWatch but you'd be better off connecting to the database and getting the count that way by querying pg_stat_activity. | 1 | 0 | 0 | I connect to a postgres database hosted on AWS. Is there a way to find out the number of open connections to that database using python API? | Finding number of open connections to postgres database | 1.2 | 1 | 0 | 60 |
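A minimal sketch of the pg_stat_activity approach from that answer, using psycopg2 (connection details are placeholders):

import psycopg2

conn = psycopg2.connect(host="your-instance.rds.amazonaws.com",
                        dbname="mydb", user="me", password="secret")
with conn.cursor() as cur:
    # Count connections to the current database (needs sufficient privileges).
    cur.execute("SELECT count(*) FROM pg_stat_activity WHERE datname = current_database();")
    print(cur.fetchone()[0])
conn.close()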
44,622,189 | 2017-06-19T04:16:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x | 48,988,894 | 4 | false | 0 | 0 | If you are using Ubuntu 17.10, Python 3 is already installed.
You can invoke it by typing python3.
If you have also installed Python 2, typing python --version shows the Python 2 version,
and typing python3 --version shows the Python 3 version.
So we can use both versions | 1 | 7 | 0 | Python newbie here. I just bought a new Mac that came with Python 2.7. I'm using the older version of Python for a class so I need to keep it. I want to install the latest version of Python, 3.6, side by side with the older version. The instructions I found online were either outdated or confusing. Can anyone point me in the right direction? | How to Install Python 3.6 along with Python 2.7? | 0 | 0 | 0 | 7,897
44,622,280 | 2017-06-19T04:29:00.000 | 1 | 0 | 0 | 0 | python,xbmc,kodi | 44,646,296 | 1 | true | 0 | 0 | You can use window properties for that. Window 10000 is one of the main windows so it will always exist.
Set it
xbmcgui.Window(10000).setProperty('myProperty', 'myValue')
Read it
xbmcgui.Window(10000).getProperty('myProperty') | 1 | 0 | 0 | I'm writing Multiple Add-ons for a personal build of KODI.
What I am trying to achieve is:
Service(Add-on A) will authenticate the box using its MAC Address.
Service(Add-on A) will save a token(T1).
Service(Add-on B) will use T1 and load movies if (T1 != None)
BUT
xbmcplugin.setSetting("token") AND xbmcplugin.getSetting("token") save the value in the context of the add-on where they are called.
HOW can I save globally accessible values in KODI with Python? | Save settings value accessible globally in KODI | 1.2 | 0 | 1 | 570
44,622,868 | 2017-06-19T05:31:00.000 | 0 | 0 | 0 | 0 | python,django,python-idle | 44,623,832 | 2 | false | 1 | 0 | Django is a web framework. You cannot compile it; you just write the code and the browser will show the page you designed.
You can use Atom or Sublime Text for this purpose. | 2 | 0 | 0 | I know there are several editors for Django like PyCharm, etc. But I want to know whether I can run a webpage created by Django from Python IDLE. If there is any way, please help me. | How to run django from python idle? | 0 | 0 | 0 | 1,144
44,622,868 | 2017-06-19T05:31:00.000 | -1 | 0 | 0 | 0 | python,django,python-idle | 44,622,927 | 2 | true | 1 | 0 | IDLE and PyCharm are development tools. When you run the code as a webpage, you shouldn't use them.
In particular, IDLE is an editor for one file at a time, while PyCharm is better for project development.
Anyway, you can use any text editor for editing the Python files. | 2 | 0 | 0 | I know there are several editors for Django like PyCharm, etc. But I want to know whether I can run a webpage created by Django from Python IDLE. If there is any way, please help me. | How to run django from python idle? | 1.2 | 0 | 0 | 1,144
44,623,187 | 2017-06-19T05:58:00.000 | 1 | 0 | 0 | 0 | c#,python,zeromq | 44,684,673 | 2 | false | 0 | 0 | Oh yes, Sir!
Using PAIR-PAIR, or even XREQ-XREP, ought to do it.
The best next step is to carefully read the API documentation for the respective Scalable Formal Communication Pattern archetypes' access points, so as to cross-check that all the pieces of pre-wired behavioural logic meet your project's needs; then harness them in your messaging setup and tune the settings to meet your performance and latency needs.
It is that simple (thanks to all the genuine know-how hidden in these built-ins).
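A minimal pyzmq sketch of one end of a PAIR-PAIR channel (the port is arbitrary; the C# side would use a matching PAIR socket):

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PAIR)
sock.bind("tcp://*:5556")              # the peer connects to tcp://host:5556

sock.send_string("hello from python")  # either side may send at any time...
reply = sock.recv_string()             # ...and receive at any time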
I have been using this very sort of cross-platform integration among Python + C/MQL4 and other computing nodes for years, so it is well worth one's time to learn the powers and strengths of ZeroMQ. | 2 | 3 | 0 | The project is to build a messaging mechanism between a Python and a C# program via ZeroMQ.
I want messages to be able to travel in/out from both ends at any time, which is NOT a basic request-reply model a.k.a. REQ/REP.
One way I can think of is to build a PUB/SUB model on two ports, i.e. two one way channels.
Is there any method to get a real duplex channel? | Is There any way to achieve a ZeroMQ fullduplex channel? | 0.099668 | 0 | 0 | 1,265 |
44,626,578 | 2017-06-19T09:17:00.000 | 0 | 0 | 1 | 0 | excel,windows,python-2.7 | 44,626,892 | 1 | false | 0 | 0 | I am assuming that both excel sheets have a list of words, with one word in each cell.
The best way to write this program would be something like this:
Open the first excel file, you might find it easier to open if you export it as a CSV first.
Create a Dictionary to store word and Cell Index Pairs
Iterate over each cell/word, add the word to the dictionary as the Key, with the Cell Reference as the Value.
Open the second excel file.
Iterate over each cell/word and check whether the word is present in the dictionary; if it is, you can print out the corresponding cells or store them however you want (see the sketch below). | 1 | 0 | 0 | I want to find the same words in two different Excel workbooks. I have two Excel workbooks (data.xls and data1.xls). If data.xls has words that also appear in data1.xls, I want to print the rows of data1.xls that contain those shared words. I hope you can help me. Thank you. | python- how to find same words in two different excel workbooks | 0 | 1 | 0 | 67
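A sketch of the dictionary approach from the answer above, assuming both workbooks have been exported to CSV (file names follow the question):

import csv

positions = {}                          # word -> (row, column) in the first file
with open("data.csv") as f:
    for r, row in enumerate(csv.reader(f)):
        for c, word in enumerate(row):
            positions[word] = (r, c)

with open("data1.csv") as f:
    for r, row in enumerate(csv.reader(f)):
        if any(word in positions for word in row):
            print(r, row)               # rows of data1 containing a shared word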
44,630,642 | 2017-06-19T12:30:00.000 | 2 | 0 | 0 | 0 | python,arrays,django | 61,437,282 | 5 | false | 1 | 0 | I don't know why nobody has suggested it, but you can always pickle things and put the result into a binary field.
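A minimal sketch of the pickle-into-a-binary-field idea (model and method names are illustrative):

import pickle
from django.db import models

class Measurement(models.Model):
    payload = models.BinaryField()      # pickled Python object, e.g. a list of ints

    def set_values(self, values):
        self.payload = pickle.dumps(values)

    def get_values(self):
        # bytes() handles backends that return a memoryview/buffer
        return pickle.loads(bytes(self.payload))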
The advantages of this method are that it will work with just about any database, it's efficient, and it's applicable to more than just arrays. The downside is that you can't have the database run queries on the pickled data (not easily, anyway). | 1 | 52 | 0 | I was wondering if it's possible to store an array in a Django model?
I'm asking this because I need to store an array of int (e.g. [1,2,3]) in a field and then be able to search a specific array and get a match with it or by its possible combinations.
I was thinking of storing those arrays as strings in CharFields and then, when I need to search something, concatenating the values (obtained by filtering another model) with '[', ']' and ',' and then using an object filter with that generated string. The problem is that I will have to generate each possible combination and then filter them one by one until I get a match, and I believe that this might be inefficient.
So, I hope you can give me other ideas that I could try.
I'm not asking for code, necessarily; any ideas on how to achieve this will be good. | Is it possible to store an array in Django model? | 0.07983 | 0 | 0 | 103,363
44,631,802 | 2017-06-19T13:23:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,python-3.x,csv,scheduled-tasks | 45,740,149 | 1 | false | 0 | 0 | An alternative is to create a .bat file and then execute it.
In the new bat file:
-- change directory to that of the Python script file.
-- using the full path, execute python with the script file as the argument.
-- end the batch file.
Make that bat file executable with sufficient privileges and execute it. | 1 | 3 | 0 | So I have written a script that scrapes betting data from an odds aggregating site and outputs everything into a CSV. My script works perfectly, however, I can only run it from within Spyder. Whenever I double click the PY file a terminal opens up and closes quickly. After messing around with it for a while I also discovered that I can run it through the command line.
I have the program/script line pointing to my python3:
C:\Users\path\AppData\Local\Continuum\Anaconda3\python.exe
And my argument line points to the script
\networkname\path\moneylineScraper.py
Best case scenario I would like to be able to run this script through task scheduler, but I also cannot even run it when I double click the Py file. Any help would be appreciated! | Able to run Python3 Script through IDE, Command Line, but not by double clicking or Task Scheduler | 0 | 0 | 0 | 203 |
44,632,272 | 2017-06-19T13:43:00.000 | 0 | 0 | 0 | 0 | java,python,firebase,firebase-realtime-database | 44,632,746 | 1 | false | 1 | 0 | Storing monetary values in counts of the smallest denomination is a common practice. It ensures that you can always count full values and reduces the risk of rounding errors associated with floating point types.
Whether an integer type is enough depends on the maximum value you want to store.
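A small sketch of the minor-units conversion from the question below, using Decimal to avoid binary-float rounding (function names are illustrative):

from decimal import Decimal

def to_minor_units(rate_str):
    # int(4.35 * 100) can yield 434 with binary floats; Decimal avoids that.
    return int(Decimal(rate_str) * 100)     # store this integer

def from_minor_units(stored):
    return Decimal(stored) / 100            # convert back for display

print(to_minor_units("4.35"))   # 435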
Firebase Database internally supports signed 64-bit longs (-2^63 ... 2^63-1). Note that JavaScript uses 64-bit double values internally, so some large integers (greater than 2^53) are not precisely representable and will lose precision. | 1 | 3 | 0 | I'm building an exchange rate app and using Firebase as the datastore.
In the previous version I stored the rates' data as strings and converted the strings to BigDecimal on the Android device for calculation.
Now, while working on version 2, I decided to review the code and make updates. I found a solution: store the rates' data as an integer type, then convert back to Decimal/Double/Float.
Flow:
Backend Python -> dataTostore = int(currency rate value * 100)
Android Java -> rateValue = dataToStore / 100.00
Any experience with this problem?
How should I solve it, or is using string + BigDecimal good enough?
Thanks. | Firebase - store currency type value | 0 | 0 | 0 | 1,268 |
44,632,365 | 2017-06-19T13:47:00.000 | 1 | 0 | 0 | 0 | python,arangodb | 44,649,139 | 1 | false | 1 | 0 | If anyone gets the same error anytime in life, it was just a temporary error due to server overload. | 1 | 1 | 0 | I'm not able anymore to change my database on arangodb.
If I try to create a collection I get the error:
Collection error: cannot create collection: invalid database directory
If I try to delete a collection I get the error:
Couldn't delete collection.
Besides that some of the collections are now corrupted.
I've been working with this db for 2 months and I'm only getting these errors now.
Thanks. | Python ArangoDB | 0.197375 | 1 | 0 | 111 |
44,632,982 | 2017-06-19T14:15:00.000 | 0 | 1 | 0 | 0 | python,api,twitter,twitter-oauth,chatbot | 44,717,595 | 2 | true | 0 | 0 | Answering my own question.
A webhook isn't needed. After searching for long hours through the Twitter documentation, I made a well-working DM bot. It uses the Twitter Stream API and the StreamListener class from tweepy; whenever a DM is received, I send a request to the REST API, which sends a DM to the mentioned recipient. | 1 | 0 | 0 | I am trying to build a Twitter chat bot which is interactive and replies according to incoming messages from users. The webhook documentation is unclear on how I receive incoming message notifications. I'm using Python. | Does twitter support webhooks for chatbots or should i use Stream API? | 1.2 | 0 | 1 | 662
44,635,062 | 2017-06-19T15:56:00.000 | 12 | 0 | 1 | 0 | python,pycharm | 44,635,127 | 1 | true | 0 | 0 | You can use local history for this.
Right click on the file you want to revert, click Local History, then Show History. It's going to open a window with your current code versus previous version of your code and a side panel with the records stored. | 1 | 8 | 0 | I know that Pycharm autosaves changes.
I want to know if it's possible to revert changes back to an older version of the file if I specify a point in time. So is it possible to revert to, say, the 8:00 AM file? | How to revert changes in Pycharm | 1.2 | 0 | 0 | 5,041
44,635,695 | 2017-06-19T16:34:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 70,358,084 | 3 | false | 0 | 0 | Define a function to calculate distances
import tensorflow as tf

calc_distance = lambda f, g: tf.norm(f - g, axis=1, ord='euclidean')  # row-wise Euclidean distance
Pass your n*m matrix to the function, for example:
P = tf.constant([[1, 2], [3, 4], [2, 1], [0, 2], [2, 3]], dtype=tf.float32)
distances = calc_distance(P[:-1], P[1:])   # consecutive pairs: rows 0..n-2 vs rows 1..n-1
print(distances)
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([2.8284273, 3.1622777, 2.2360682, 2.2360682], dtype=float32)> | 1 | 1 | 1 | I have a n*m tensor that basically represents m points in n dimensional euclidean space. I wanted calculate the pairwise euclidean distance between each consecutive point.
That is, if my column vectors are the points a, b, c, etc., I want to calculate euc(a, b), euc(b, c), etc.
The result would be an m-1 length 1D-tensor with each pairwise euclidean distance.
Anyone know who this can be performed in TensorFlow? | Tensorflow - Euclidean Distance of Points in Matrix | 0 | 0 | 0 | 2,501 |
44,636,780 | 2017-06-19T17:41:00.000 | 0 | 0 | 1 | 0 | python,json,csv | 50,038,603 | 2 | false | 0 | 0 | For simple tabular data, CSV is quick and easy with a low footprint; for more complex entities, JSON is the better option. | 1 | 0 | 0 | I am working on a script in Python where I need to read a number of IP addresses classified in different sheets of an Excel file, and I also have the option of doing the same from a JSON file.
May I have suggestions on which type of file will be faster to read from, JSON or CSV, in terms of performance? | Choosing between JSON file and csv file to read data from in python | 0 | 0 | 0 | 848
44,636,877 | 2017-06-19T17:47:00.000 | 0 | 0 | 0 | 0 | python,validation,machine-learning,neural-network,keras | 49,830,684 | 2 | false | 0 | 0 | You need to make sure that your network input is of shape (None,None,3), which means your network accepts an input color image of arbitrary size. | 2 | 5 | 1 | I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validation data to be of the same size as the input. Instead, I'm hoping to be able to validate on entire images (not patches) so that I can validate on my entire validation set and compare the results to other methods I've used so far.
One solution I found was to alternate between fit() and evaluate() each epoch. However, I was hoping to be able to observe these results using Tensorboard. Since evaluate() doesn't take in callbacks, this solution isn't ideal. Does anybody have a good way validating on full-resolution images while training on patches? | Training and validating on images with different resolution in Keras | 0 | 0 | 0 | 888 |
44,636,877 | 2017-06-19T17:47:00.000 | 0 | 0 | 0 | 0 | python,validation,machine-learning,neural-network,keras | 45,889,288 | 2 | false | 0 | 0 | You could use fit generator instead of fit and provide a different generator for validation set. As long as the rest of your network is agnostic to the image size, (e.g, fully convolutional layers), you should be fine. | 2 | 5 | 1 | I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validation data to be of the same size as the input. Instead, I'm hoping to be able to validate on entire images (not patches) so that I can validate on my entire validation set and compare the results to other methods I've used so far.
One solution I found was to alternate between fit() and evaluate() each epoch. However, I was hoping to be able to observe these results using Tensorboard. Since evaluate() doesn't take in callbacks, this solution isn't ideal. Does anybody have a good way validating on full-resolution images while training on patches? | Training and validating on images with different resolution in Keras | 0 | 0 | 0 | 888 |
44,639,106 | 2017-06-19T20:10:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 44,937,787 | 1 | false | 0 | 0 | I can't comment on the question because of low rep, so using an answer instead.
Can you clarify your question a bit, maybe with a small concrete example using very small tensors?
What are the "columns" you are referring to? You say that you want to keep 50 columns (presumably 50 numbers) per image. If so, the (10, 50) shape seems like what you want - it has 50 numbers for each image in the batch. The (10, 50, 20, 3) shape you mention would allocate 50 numbers to each "image_column x channel". That is 20*3*50 = 3000 numbers per image. How do you want to construct them from the 50 that you have?
Also, can you give a link to tf.batch_nd(). I did not find anything similar and relevant. | 1 | 0 | 1 | I have a tensor of shape (10, 100, 20, 3). Basically, it can be thought of as a batch of images. So the image height is 100 and width is 20 and channel depth is 3.
I have run some computations to generate a set of 10*50 indices corresponding to 50 columns I would like to keep per image in the batch. The indices are stored in a tensor of shape (10, 50). I would like to end up with a tensor of shape (10, 50, 20, 3).
I have looked into tf.batch_nd() but I can't figure out the semantics for how indices are actually used.
Any thoughts? | TensorFlow extracting columns | 0 | 0 | 0 | 78 |
44,642,495 | 2017-06-20T01:42:00.000 | 4 | 0 | 1 | 0 | python,c | 44,643,064 | 2 | true | 0 | 0 | In Python 2.7
To get integers or floats as input you can use the keyword input.
Example: temp = input("Give your value")
Here temp will be an int or float (in Python 2, input() evaluates whatever you type as an expression).
There is another command, raw_input(): whatever value it is given, it returns as a string.
Example: temp = raw_input("Give your value")
Here temp is of string type. | 2 | 2 | 0 | I am trying to solve problems from SPOJ, so I need to be able to read input from stdin. I did a lot of problems in C using scanf but wanted to try Python as well. How do I read the stdin inputs in Python? (I want to use Python 2.6/2.7.) | c has scanf, does python have something similar? | 1.2 | 0 | 0 | 1,701
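For SPOJ-style input specifically, a common Python 2 pattern is to read all of stdin at once rather than calling raw_input() repeatedly (a sketch, assuming whitespace-separated integers):

import sys

tokens = sys.stdin.read().split()   # all input, split on any whitespace
numbers = map(int, tokens)
print(sum(numbers))                 # e.g. echo the sum of all input integers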
44,644,179 | 2017-06-20T05:02:00.000 | 0 | 0 | 1 | 0 | python | 44,644,207 | 2 | false | 0 | 0 | There's nothing wrong with having two versions of python installed, and it's actually quite common to do so. Usually, one would install them with different names (python vs python3, for example) to avoid confusion though. | 2 | 0 | 0 | I want to work with both Python 2 and Python 3. If I install both versions on the same OS, what would happen?
Is there anything wrong with that? | Python:what happened if there are two python version install in the same OS? | 0 | 0 | 0 | 408
44,644,179 | 2017-06-20T05:02:00.000 | 0 | 0 | 1 | 0 | python | 44,644,252 | 2 | false | 0 | 0 | If you are using Python from the command line, you can run the python command for Python 2 and the python3 command for Python 3.
In an IDE such as PyCharm, you can select the project interpreter from the settings. | 2 | 0 | 0 | I want to work with both Python 2 and Python 3. If I install both versions on the same OS, what would happen?
Is there anything wrong with that? | Python:what happened if there are two python version install in the same OS? | 0 | 0 | 0 | 408
44,644,596 | 2017-06-20T05:35:00.000 | 0 | 0 | 0 | 0 | python,django,wagtail | 44,649,905 | 1 | false | 1 | 0 | Yes, that should work - I can't think of any issues that would prevent that approach from scaling to thousands of groups.
However, be aware that Wagtail doesn't currently provide complete isolation between sub-sites on the same Wagtail installation - users will still be able to see pages from other groups when they're choosing a page to link to. | 1 | 0 | 0 | Scenario: A user registers on my Wagtail site. This creates a Group and a Page. This page will be set to 'Private, accessible to users in specific groups', with the just-created group selected.
The user invites people to the group and shares pages as child pages of the just-created root page, so only group members can access it.
Would this scale? For like, hopefully, thousands of groups? Is this a way to separate content for a SAAS setup? | Wagtail Page Privacy 'Private, accessible to users in specific groups' as SAAS separation | 0 | 0 | 0 | 254 |
44,645,645 | 2017-06-20T06:42:00.000 | 6 | 0 | 1 | 0 | python,performance,dependencies,standard-library | 44,645,741 | 1 | true | 0 | 0 | The performance impacts are minimal.
Importing a module the first time loads the module bytecode and objects into memory (stored in the sys.modules mapping). That loading will take a small amount of time, and a small amount of memory.
Your project would have to be much larger for that to start to matter. The Mercurial project, which cares deeply about start-up time (a command-line client has to be responsive and fast), uses a lazy-loading scheme where imported module loading is deferred until actually accessed. This way the project can reference hundreds of modules (and extensions) but only actually load those that the current command line options require.
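The simplest form of that idea needs no framework at all; a function-level import defers the cost until first use:

def parse_config(text):
    import json          # imported on first call only; cached in sys.modules after
    return json.loads(text)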
The alternative would be for your own code to define the functionality, but executing the bytecode for that would also take time and memory, with the added downside that you are likely to introduce bugs or make design mistakes that the standard library has managed to eliminate over the years. | 1 | 3 | 0 | I was recently thinking about standard libraries and using them in my programming. And I got to wondering about calling libraries: I hear a lot of talk about dependencies and managing them in order to not overload your program with unnecessary modules and whatnot. So I was wondering if there is additional load/increase in resource use when using functions and modules from the standard library.
For instance, if I wrote a program that was entirely built of standard lib functions and none of my "own" code (meaning that I have a large amount of import statements), would I see a performance dropoff? Or is standard library loaded with every program, regardless of whether it's called or not? Hence it being part of the standard library.
Thanks guys, happy to elaborate on my question if I haven't been clear enough. | Is there a penalty for using built in libraries in Python? | 1.2 | 0 | 0 | 104 |
44,647,376 | 2017-06-20T08:09:00.000 | 0 | 0 | 1 | 0 | django,python-2.7,django-templates,django-views | 44,648,315 | 2 | false | 1 | 0 | If you need to see the JSON file via the browser, you need to put it in the static folder. | 2 | 0 | 1 | My JSON file is in the same directory as the view.py file.
However, when I open the HTML page in the browser, an IOError occurs.
When I run view.py through PyCharm, it works fine.
The JSON file is a database configuration file. I want to display some data from the database, and for that I need to connect to the database first. | Django app can not find json file in the same directory | 0 | 0 | 0 | 1,550
44,647,376 | 2017-06-20T08:09:00.000 | 2 | 0 | 1 | 0 | django,python-2.7,django-templates,django-views | 44,648,366 | 2 | true | 1 | 0 | With the information provided, it is really hard to give you a good answer.
However, I assume that you run your view in PyCharm by actually running views.py instead of using the runserver command? In this case, the execution directory is the folder of your app and a relative path to the JSON file can be resolved.
As soon as you run a development or real server and access the view in the browser using its URL, the execution directory is the directory of the manage.py file (the root folder of your project). If you just access the name of your JSON file, then it cannot be found, since you are searching for it in the wrong folder.
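The usual fix for the JSON-next-to-views.py case is to build the path relative to the module file instead of the working directory; a sketch (the file name is illustrative):

import json
import os

HERE = os.path.dirname(os.path.abspath(__file__))

def load_db_config():
    with open(os.path.join(HERE, "db_config.json")) as f:
        return json.load(f)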
What are you using the JSON file for? If it should be available in the web browser as well (as a resource for e.g. JavaScript), then you may want to put it into the static folder of your app or project (remember to run manage.py collectstatic then). In this case, STATIC_ROOT would be defined in settings.py and you could resolve the path from there (using os.path.join(STATIC_ROOT, "path to json file")). | 2 | 0 | 1 | My JSON file is in the same directory as the view.py file.
However, when I open the HTML page in the browser, an IOError occurs.
When I run view.py through PyCharm, it works fine.
The JSON file is a database configuration file. I want to display some data from the database, and for that I need to connect to the database first. | Django app can not find json file in the same directory | 1.2 | 0 | 0 | 1,550
44,647,687 | 2017-06-20T08:25:00.000 | 1 | 0 | 0 | 0 | python-3.x,nameko | 44,650,675 | 1 | true | 1 | 0 | Yes, Nameko works with Python 3.
You just need to execute nameko run in an environment where Python 3 is the default (or only) interpreter. | 1 | 1 | 0 | nameko run --config ./foobar.yaml my_app
The above line defaults to running my_app with Python 2. Can I change it to Python 3? The documentation doesn't show this option, but considering you can get nameko with pip3, it sounds reasonable. | can nameko use python3? | 1.2 | 0 | 0 | 232
44,648,000 | 2017-06-20T08:40:00.000 | 0 | 0 | 0 | 0 | python,activemq,message-queue,stomp | 44,658,880 | 1 | true | 1 | 0 | Yes, you can move messages between queues within the same broker, or across brokers.
Same broker use case:
Application sends message to queue1 on brokerA. Using ActiveMQ's Composite Destination support, you could configure brokerA to also deliver the message to queue2 on brokerA.
Different broker use case:
Application sends message to queue1 on brokerA; the messages are then passed to queue2 on brokerB using a bridge or an ActiveMQ network connector.
Additionally, code could read a message from queue1 on brokerA, perform some processing and then publish the message to queue2 on brokerB.
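A rough sketch of that read-process-republish step with the stomp.py client (host, port and queue names are placeholders, and the exact listener signature varies between stomp.py versions):

import stomp

class Forwarder(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_message(self, headers, body):
        user = body.strip()                            # e.g. the user found in step 1
        self.conn.send(destination="/queue/queue2", body=user)

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("forwarder", Forwarder(conn))
conn.connect(wait=True)
conn.subscribe(destination="/queue/queue1", id=1, ack="auto")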
As Tim mentioned in his comment, additional detail identifying which use case applies to you is needed in order to recommend the best solution. | 1 | 0 | 0 | Is it possible to share data between queues in STOMP? We have a new project and my task is to pass data from one queue to another. Consider a system with ActiveMQ and STOMP. In this system I have to find a message returned by queue1 and pass it to queue2. This doesn't make much sense to me. Any advice about this issue would be appreciated.
Use Case:
I have an application like this:
queue1 : make query from ldap and find user
queue2 : make query from exchange server with given user
I want to use "user" founded from queue1 for query in queue2 | Pass data between queues - Stomp | 1.2 | 0 | 0 | 50 |
44,651,925 | 2017-06-20T11:33:00.000 | 2 | 0 | 1 | 0 | python,linux,windows,keyboard-shortcuts,interpreter | 44,653,688 | 1 | true | 0 | 0 | Normally, IDLE has an Option / Configure IDLE menu which allows you to remap almost any action to a key combination. The newline and indent action is by default mapped to Key Return and Num Keypad Return, while Ctrl J is used for plain newline and indent. But it is easy to change this mapping configuration. | 1 | 1 | 0 | Recently my Enter key stopped working. For sure it's a hardware problem!. However I managed so many days without Enter key by using the alternatives ctrl + j or ctrl + m .Running python programs was fine as I would run the script by saving it in a file. Now that I need to give commandline values I have to press enter for it to be accepted in the IDLE Interpreter. While typing this too I can't press enter or ctrl + j or ctrl + m.
But how did I do this? (This newline?) I copied a empty newline from another file. Even this doesn't work in the interpreter. Someone help any way to enter values in python IDLE Interpreter without actually using enter key.
One good alternative would be to use the cmd or terminal and using the command line python script.py. And then using ctrl+m as this works there.
But I miss the Python interpreter. Any alternatives or suggestions?
Of course an on-screen keyboard is an option, but I'm looking for any key alternatives to Enter in the Python interpreter. Is that even possible? | Alternative for 'enter' key in python interpreter? | 1.2 | 0 | 0 | 1,474
44,654,127 | 2017-06-20T13:14:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-cloud-sql | 44,715,500 | 2 | false | 1 | 0 | @Kevin Malachowski : Thanks for guiding me with your info and questions as It gave me new way of thinking.
Historical data records will be at most 0.3-0.5 million. Now I'll use BigQuery for historical advanced search.
For live data, Cloud SQL will be used, as we must focus on performance for fetched data.
Some performance issues will remain for historical search, when a user wants results from both live and historical data (BigQuery takes about 5-6 seconds, or more, in the worst case), but it will be optimized according to the data and the structure of the model. | 2 | 1 | 0 | I'm using Google Cloud SQL for applying advanced search on people data to fetch a list of users. In Datastore, there is data already stored with 2 models. The first is used to track users' current data, and the other model is used to track the historical timeline. The current data stored on Google Cloud SQL is more than a million rows for all users. Now I want to implement advanced search on historical data, including between dates, by adding all the history data to the cloud.
If anyone can suggest a better structure for this historical model: I've gone through many links and articles but cannot find a proper solution, as I have to take care of search performance (in the current search, the time taken to fetch results is normal, but when history is fetched, it scans all the records, which slows down queries because of the complex JOINs needed). The queries used to fetch the data from Cloud SQL are built dynamically based on the users' needs. For example, a user wants the list of employees whose manager is "[email protected]"; using Python code, the query will be built accordingly. Now a user wants to find users whose manager WAS "[email protected]" with effectiveFrom 2016-05-02 to 2017-01-01.
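For example, the query might be built like this (the table and column names are simplified placeholders, not my actual schema):
def history_query(manager, date_from, date_to):
    # parameterized SQL, built dynamically as described above
    sql = ("SELECT * FROM user_history "
           "WHERE manager = %s "
           "AND effective_from BETWEEN %s AND %s")
    return sql, (manager, date_from, date_to)

query, params = history_query('[email protected]', '2016-05-02', '2017-01-01')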
I've found some use cases for the structure, as below:
1) Same model as the current structure, with a new flag column isCurrentData (status of whether the data is historical or active).
Disadvantages:
- Queries slow down while fetching data, as every record will be scanned.
- Duplication of data might increase.
All these disadvantages will affect the performance of advanced search by increasing query time.
A solution to this problem is to partition the whole table into different tables.
2) Partition based on year.
As time passes, this will generate too many tables.
3) 2 tables might be maintained:
the 1st for current data and the second one for history. But when a user wants to search data on both models, it will create complexity in building the query.
So, I need suggestions for structuring the historical timeline with improved performance and effective data handling.
Thanks in advance. | Google CloudSQL : structuring history data on cloudSQL | 0 | 1 | 0 | 81 |
44,654,127 | 2017-06-20T13:14:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-cloud-sql | 44,662,852 | 2 | false | 1 | 0 | Depending on how often you want to do live queries vs historical queries and the size of your data set, you might want to consider placing the historical data elsewhere.
For example, if you need quick queries for live data and do many of them, but can handle higher-latency queries and only execute them sometimes, you might consider periodically exporting data to Google BigQuery. BigQuery can be useful for searching a large corpus of data but has much higher latency and doesn't have a wire protocol that is MySQL-compatible (although its query language will look familiar to those who know any flavor of SQL). In addition, while for Cloud SQL you pay for data storage and the amount of time your database is running, in BigQuery you mostly pay for data storage and the amount of data scanned during your query executions. Therefore, if you plan on executing many of these historical queries it may get a little expensive.
Also, if you don't have a very large data set, BigQuery may be a bit of an overkill. How large is your "live" data set, and how large do you expect your "historical" data set to grow over time? Is it possible to just increase the size of the Cloud SQL instance as the historical data grows, until the point at which it makes sense to start exporting to BigQuery? | 2 | 1 | 0 | I'm using Google Cloud SQL for applying advanced search on people data to fetch a list of users. In Datastore, there is data already stored with 2 models. The first is used to track users' current data, and the other model is used to track the historical timeline. The current data stored on Google Cloud SQL is more than a million rows for all users. Now I want to implement advanced search on historical data, including between dates, by adding all the history data to the cloud.
If anyone can suggest a better structure for this historical model: I've gone through many links and articles but cannot find a proper solution, as I have to take care of search performance (in the current search, the time taken to fetch results is normal, but when history is fetched, it scans all the records, which slows down queries because of the complex JOINs needed). The queries used to fetch the data from Cloud SQL are built dynamically based on the users' needs. For example, a user wants the list of employees whose manager is "[email protected]"; using Python code, the query will be built accordingly. Now a user wants to find users whose manager WAS "[email protected]" with effectiveFrom 2016-05-02 to 2017-01-01.
I've found some use cases for the structure, as below:
1) Same model as the current structure, with a new flag column isCurrentData (status of whether the data is historical or active).
Disadvantages:
- Queries slow down while fetching data, as every record will be scanned.
- Duplication of data might increase.
All these disadvantages will affect the performance of advanced search by increasing query time.
A solution to this problem is to partition the whole table into different tables.
2) Partition based on year.
As time passes, this will generate too many tables.
3) 2 tables might be maintained:
the 1st for current data and the second one for history. But when a user wants to search data on both models, it will create complexity in building the query.
So, I need suggestions for structuring the historical timeline with improved performance and effective data handling.
Thanks in advance. | Google CloudSQL : structuring history data on cloudSQL | 0 | 1 | 0 | 81 |
44,658,952 | 2017-06-20T16:53:00.000 | 2 | 0 | 0 | 0 | python,tkinter,ttk | 44,659,737 | 1 | true | 0 | 1 | The solution is to use the tkinter text widget. tkinter and ttk are designed to work together. | 1 | 1 | 0 | I noticed that the Text widget from tkinter is not present in the ttk widgets.
| 1 | 1 | 0 | I noticed that the Text widget from tkinter is not present in the ttk widgets.
I'm using ttk instead of plain tkinter because its interface suits me better.
And I need the Text widget because it supports multiple lines, unlike the Entry widget.
Does anyone have a solution for my problem? | Python - Text widget from tkinter in ttk | 1.2 | 0 | 0 | 3,708 |
44,659,458 | 2017-06-20T17:24:00.000 | 0 | 0 | 0 | 0 | python,flask | 44,661,281 | 1 | true | 1 | 0 | As @davidism suggested, Config files are never meant to be tracked since these are your secret keys and anybody with access to your code will have access to your keys if tracked.
But, there is no hard and fast rule in Flask to keep your settings file in a specific location. You can keep them anywhere and name them anything.
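For example, a minimal hedged sketch of loading it (instance/settings.py is the path from the question; instance_relative_config and from_pyfile are standard Flask APIs):
from flask import Flask

app = Flask(__name__, instance_relative_config=True)
app.config.from_pyfile('settings.py')  # reads instance/settings.py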
But, when adding the config to the app, the correct file path must be given. | 1 | 0 | 0 | I've read that it's best practice for security reasons to store things like API keys in the instance/settings.py file in Flask - why is this so, and what is the mechanism that makes it so? I haven't been able to find much documentation about this online. | Why is it more secure to store sensitive settings in instance settings instead of config settings in Flask? | 1.2 | 0 | 0 | 49 |
44,662,278 | 2017-06-20T20:18:00.000 | 0 | 0 | 0 | 0 | python,opencv,dll,pip | 44,662,310 | 1 | false | 0 | 0 | Use the zip, extract it, and run sudo python3 setup.py install if you are on Mac or Linux. If on Windows, open cmd or Powershell in Admin mode and then run py -3.6 setup.py install, after cding to the path of the zip. If on Linux, you also have to run sudo apt-get install python-opencv. Maybe on Mac you have to use Homebrew, but I am not sure. | 1 | 3 | 1 | I attempted to install Opencv for python two ways,
A) Downloading the opencv zip, then copying cv2.pyd to /Python36/lib/site-packages.
B) undoing that, and using "pip install opencv-python"
/lib/site-packages is definitely the place where python is loading my modules, as tensorflow and numpy are there, but any attempt to "import cv2" leads to "ImportError: DLL Load Failed: The specified module could not be found"
I am at a loss; any help appreciated. And yes, I have tried reinstalling VC redist 2015. | Pip and/or installing the .pyd of library to site-packages leads "import" of library to DLL load failure | 0 | 0 | 0 | 4,066
44,663,086 | 2017-06-20T21:13:00.000 | 1 | 0 | 0 | 0 | python,tkinter,listbox | 44,663,142 | 1 | true | 0 | 1 | There is no simple way. A listbox can only contain text.
| 1 | 0 | 0 | I have a listbox, and I would like to have a progress bar inside each line of this listbox. Is there a simple way to do that, or do I have to rewrite the listbox class, or maybe override it? | Customize tkinter listbox by adding progress bars | 1.2 | 0 | 0 | 202
44,663,094 | 2017-06-20T21:14:00.000 | 1 | 0 | 1 | 0 | python,multithreading,python-3.x,user-interface,mmo | 44,664,163 | 2 | true | 0 | 1 | Using tkinter and .after, I wrote a little single-threaded program that displays 1000 ovals that move randomly across the screen, updating every 50ms, and I don't see any lag. At 10ms I think I maybe see a tiny bit of lag.
These are simple objects with very little math to calculate the new positions, and there's no collision detection except against the edges of the window.
The GUI seems responsive. I am able to type into a text window in the same GUI as fast as I can.
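A minimal sketch of that pattern (illustrative; not the exact program I tested):
import random
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=400)
canvas.pack()
ovals = [canvas.create_oval(200, 200, 210, 210) for _ in range(1000)]

def tick():
    for oid in ovals:
        canvas.move(oid, random.randint(-2, 2), random.randint(-2, 2))
    root.after(50, tick)  # reschedule every 50 ms; the GUI stays responsive

tick()
root.mainloop()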
I don't know how that compares to what you want to do. | 1 | 0 | 0 | I'm a bot developer but I'm new to Python. I've been reading for hours now, planning the design of a new bot.
I'd like your opinion on performance issues with running a GUI and a very fast core loop to keep a modest array of game entities updated.
The bot consists of a main array in an infinite loop which continually updates and also runs calculations. I know from my past bots that GUIs provide a problem with performance as they need to wait to detect events.
I've looked at the possibility of a second thread, but I read that tkinter doesn't like more than one thread.
I've checked out the idea of using .after to run the core array loop from the tkinter mainloop but my gut tells me that this would be bad practice.
Right now I feel all I can do is try to contain the GUI and the core array all in one loop but this has never been good for performance.
Are there better ways of designing the structure of this bot that I have not discovered?
Edit
I decided on removing the mainloop from tkinter and simply using .update() to refresh any GUI components I have; right now, this only consists of some labels which overlay the game screen.
Both the labels and the bot's functions run great so far. | Python - Updating core array loop < 100ms and keeping the tkinter GUI responsive | 1.2 | 0 | 0 | 128 |
44,664,221 | 2017-06-20T22:52:00.000 | 3 | 0 | 0 | 0 | python,user-interface,tkinter | 44,664,589 | 1 | false | 0 | 1 | The .geometry() method of the root (or any Toplevel) window is your interface to its size and position. Call it with no parameters to get a geometry string; you can then call the method with that string to set the window to the same place on the screen. For example, I get '200x200+5+28' for a small window on my main monitor, '200x200+2592+414' after moving it to the monitor on the right. (This is on Mac OS X, conceivably it works differently on other platforms.) | 3 | 0 | 1 | I'm running Tkinter on a machine that has 3 monitors. Is there a way to specify which monitor (& the location on that monitor) to display my GUI? | Tkinter: Issue with Multiple Monitors | 0.53705 | 0 | 0 | 26
44,667,086 | 2017-06-21T04:52:00.000 | 0 | 0 | 0 | 0 | python,user-interface,automation | 44,670,539 | 2 | false | 0 | 0 | For that specific command, you should not need any automation tool to feed input to your script. Piping a file to it should allow it to execute without user interaction (like Coldspeed said in his comment).
Most command line interfaces allow parametrized execution and most parameters you can either build into your script or read them from a config file somewhere.
For those command line tools that require "real" user interaction (i.e. you can't pipe the input, parametrize it or somehow build it into the command itself), I used the pexpect module with great success.
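A hedged sketch with pexpect (the exact prompts depend on your OpenSSL configuration, so the patterns and answers below are illustrative):
import pexpect

child = pexpect.spawn(
    'openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095')
for prompt, answer in [('Country Name.*:', 'US'),
                       ('State.*:', 'CA'),
                       ('Locality.*:', 'City'),
                       ('Organization Name.*:', 'Org'),
                       ('Organizational Unit.*:', 'Unit'),
                       ('Common Name.*:', 'example.com'),
                       ('Email.*:', '[email protected]')]:
    child.expect(prompt)
    child.sendline(answer)
child.expect(pexpect.EOF)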
os.system("openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095") | How can we automate the user interaction in python script | 0 | 0 | 1 | 322 |
44,668,401 | 2017-06-21T06:32:00.000 | 1 | 0 | 0 | 0 | python,django,selenium,beautifulsoup,scrapy | 44,668,515 | 1 | false | 1 | 0 | "The idea is to create a variety of methods that can be run by
clicking a button and/or inserting content in inputs, and having the
tests / functions run and returning the specified results."
Maybe Flask can help you here. It has nice functionality to route URLs to specific functions or methods in your code. It provides a convenient way to bind actions to code. You can search the web for how you can leverage Flask to cater to your specific needs, but here I just wanted to convey that it would help.
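A minimal hedged sketch of that binding (the route, form field and helper function are all illustrative, not a real implementation):
from flask import Flask, request

app = Flask(__name__)

def run_selenium_check(url):
    # stand-in for the real Selenium/BeautifulSoup logic
    return 'checked ' + url

@app.route('/run-test', methods=['POST'])
def run_test():
    # triggered by a button/form in the page
    return run_selenium_check(request.form['url'])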
| 1 | 0 | 0 | I want to create a web app for my company that functions like a tool suite for non-technical employees to use. I'm using:
Python,
Selenium WebDriver,
BeautifulSoup,
Scrapy,
Django
Is it possible and is this the right approach? | Selenium / Web crawling / Web scraping App in Python | 0.197375 | 0 | 1 | 322 |
44,668,928 | 2017-06-21T07:01:00.000 | 2 | 1 | 0 | 0 | python,redis,redis-cluster,redis-py | 44,669,252 | 1 | false | 0 | 0 | There is no command that sets the TTL for multiple keys, in the fashion that MSET works. You can, however, replace the call to MSET with a Lua script that does SETEX for each key and value passed to it as parameters.
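A hedged sketch of such a script via redis-py (register_script is a real redis-py API; the keys, values and TTL are illustrative):
import redis

r = redis.StrictRedis()

# KEYS = the key names; ARGV = the TTL followed by the values
lua = """
for i, key in ipairs(KEYS) do
    redis.call('SETEX', key, ARGV[1], ARGV[i + 1])
end
return #KEYS
"""
bulk_setex = r.register_script(lua)
bulk_setex(keys=['k1', 'k2'], args=[60, 'v1', 'v2'])  # both expire in 60s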
| 1 | 1 | 0 | I am using mset(keys, *args) to bulk set keys.
I also want to set an expiration time on the keys. The reason I am using mset is to save calls to Redis.
Is there a way to bulk set keys with expiration?
Thanks. | Is it possible to set expiry to redis keys (bulk operation) | 0.379949 | 0 | 0 | 2,243 |
44,669,297 | 2017-06-21T07:19:00.000 | 1 | 0 | 0 | 0 | python,html,css,django | 44,669,492 | 1 | false | 1 | 0 | I don't know your folder layout; however, have you tried providing it with the full URL to the file? If that doesn't work, provided it's sensible for your folder structure, you can try relative paths, for instance "../images/one.png" | 1 | 0 | 0 | I was trying to set a background image on my website,
but the image is not visible; in the style.css file it shows this error: "Cannot resolve directory 'Images'"
CSS:
background: white url("images/one.png") no-repeat right bottom;
index.html:
{% load staticfiles %}
<link rel="stylesheet" type="text/css" href="{% static 'music/style.css' %}"/>
I want to select image from music/images/one.png
Thanks in advance | Background image add in website in Django | 0.197375 | 0 | 0 | 128
44,670,758 | 2017-06-21T08:30:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,selenium-webdriver | 44,672,323 | 2 | false | 0 | 0 | python -c 'for i in range(10): print("hello")'
tested.
or your shell
for i in `seq 10`; do echo hello; done | 1 | 2 | 0 | I have a script which is an automated test. I need to be able to set a run to be X cycles long (or even infinite). What is the best method of doing so?
By the way, currently I am using my IDE to run the whole script and sometimes I use the CLI to run certain code chunks.
Both of which will be needed for my purposes. | What is the recommended way to run a script X times in a row (one at a time) | 0 | 0 | 0 | 1,877
44,671,776 | 2017-06-21T09:16:00.000 | 1 | 1 | 0 | 0 | python,pytest,codenvy,eclipse-che | 44,780,474 | 2 | false | 1 | 0 | pytest is not currently integrated into the test framework the way JUnit is, but since you have a command line you could run your tests manually or create a custom command from the commands tab on the left.
click the "+" under "TEST"
replace the "Command Line" entry that reads echo "hello" with
python ${current.project.relpath}
click save
With your Python test file selected in the Project Explorer, click run on your new custom test (Shift+F10). | 1 | 0 | 0 | Self explanatory question here.
Is this even possible? Can't find any documentation regarding this anywhere.
And if not, how complicated is it to write a plugin?
Thanks! | Is it possible to run Py.Test tests on Eclipse Che/Codenvy? | 0.099668 | 0 | 0 | 219 |
44,671,776 | 2017-06-21T09:16:00.000 | 0 | 1 | 0 | 0 | python,pytest,codenvy,eclipse-che | 45,098,100 | 2 | false | 1 | 0 | Correct, there's no built-in UI to run Python tests. However, you may use a command-line approach. | 2 | 0 | 0 | Self explanatory question here.
Is this even possible? Can't find any documentation regarding this anywhere.
And if not, how complicated is it to write a plugin?
Thanks! | Is it possible to run Py.Test tests on Eclipse Che/Codenvy? | 0 | 0 | 0 | 219 |
44,674,578 | 2017-06-21T11:16:00.000 | 0 | 0 | 1 | 0 | python,function,methods | 44,674,714 | 1 | false | 0 | 0 | In my opinion, if the functionality should be part of an object's behavior and is related only to that object, then implement it as a method at the class scope; for example, only str should have functions like lower.
When the function can be applied to a lot of class types (like len can be applied to lists or to str), then it should be a function. | 1 | 0 | 0 | What are the advantages of having a python method, like lower(), being applied to an appropriate object, s, via the dot notation (i.e., s.lower()) versus a method, like len(), being applied by receiving the object as an argument, e.g. len(s)? If I want to create such a method / function, what are the critical points I should consider when choosing the first or the second implementation? Thanks! | Python method object application | 0 | 0 | 0 | 33
44,677,753 | 2017-06-21T13:37:00.000 | 0 | 0 | 1 | 0 | python,rpm,packaging,distutils,rpm-spec | 46,433,084 | 1 | false | 0 | 0 | Any answer likely depends on the distro for which the rpm was built. A generic, albeit manual approach, would to start with rpm -q --requires $PACKAGE but as you already have the spec file, you can simply rpmspec -q --requires *spec to get that same info. Look for the packages providing Python resources, e.g., python3-requests. You'll need to translate each of these into the Python package name, e.g., 'requests' for your setup.py. You may find that rpm -q --provides python3-requests to be useful at this step; maybe not. | 1 | 0 | 0 | Basically I'm working on porting a program from being packaged with RPM into using setup.py to package it as a wheel. My core question is whether there exists some guide or tool on how to make this conversion.
The key issue is that I'm looking to convert dependencies as specified by RPM's spec file to setup.py and can't find any information online as to how to do this. | How to convert RPM spec file dependencies to Python setup.py? | 0 | 0 | 0 | 497 |
44,677,865 | 2017-06-21T13:41:00.000 | 0 | 0 | 1 | 0 | python,centos,pycharm | 44,677,981 | 2 | false | 0 | 0 | How did you install python-3.6.1? Also, why not search for the python3 executable? I don't specifically know about Centos, but on UNIX systems it could be in /usr/bin/python3 | 1 | 0 | 0 | By default I had python-2.6 installed on my Centos. I installed python-3.6.1 and Pycharm IDE. When I open settings of my Pycharm I can't see new interpreter for python-3.6.1. How do I locate and add the new interpreter? | Where to locate Python-3.6.1 interpreter in Pycharm on Centos | 0 | 0 | 0 | 937 |
44,678,133 | 2017-06-21T13:53:00.000 | 0 | 0 | 0 | 0 | python,selenium,selenium-chromedriver,user-agent | 44,678,533 | 1 | true | 0 | 0 | I would go with creating a new driver and copy all the necessary attributes from the old driver except the user agent. | 1 | 0 | 0 | I am running some code with selenium using python, and I figured out that I need to dynamically change the UserAgent after I already created the webdriver. Any advice if it is possible and how this could be done? Just to highlight - I want to change it on the fly, after almost each GET or POST request I send | Python selenium with chrome webdriver - change user agent | 1.2 | 0 | 1 | 1,149 |
44,678,706 | 2017-06-21T14:17:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,virtualenv,virtualenvwrapper | 44,679,103 | 4 | false | 0 | 0 | Requirements:
Virtual Env
Pycharm
Go to Virtual env and type which python
Add remote project interpreter (File > Default Settings > Project Interpreter (cog) add remote)
You'll need to set up your file system so that PyCharm can also open the project.
NOTE:
do not turn off your virtual environment without saving your run configurations; that will cause PyCharm to see your run configurations as corrupt
There's a button on the top right that reads "share"; enable this and your run configs will be saved to a .idea file and you'll have a lot fewer issues | 1 | 0 | 0 | I've been searching for this with no success; I don't know if I am missing something, but I have a virtualenv already. How do I create a project to associate the virtualenv with? Thanks.
P.S. I'm on Windows | Associating a python project with a virtual environment | 0.099668 | 0 | 0 | 3,130
44,678,706 | 2017-06-21T14:17:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,virtualenv,virtualenvwrapper | 44,679,249 | 4 | false | 0 | 0 | If you already have your virtualenv installed you just need to start using it.
Create your project's virtual environment using virtualenv env_name in cmd. To associate a specific version of Python with your environment, use: virtualenv env_name -p pythonx.x;
Activate your environment by navigating into its Scripts folder and executing activate.
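For example, a hedged sketch of those steps in Windows cmd (the environment name, interpreter and package are just examples):
virtualenv myproject_env -p python3.6
cd myproject_env\Scripts
activate
:: every pip install now goes into this env only
pip install requests
pip freeze > requirements.txt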
Your terminal is now using your virtual environment; that means every Python package you install and the Python version you run will be the ones you configured inside your env.
I like to create environments with names similar to my projects, and I always use one environment per project; that helps keep track of which packages each of my projects needs to run.
If you haven't read much about venvs yet, try googling requirements.txt along with the pip freeze command; those are pretty useful for keeping track of your project's packages. | 1 | 0 | 0 | I've been searching for this with no success; I don't know if I am missing something, but I have a virtualenv already. How do I create a project to associate the virtualenv with? Thanks.
P.S. I'm on Windows | Associating a python project with a virtual environment | 0.049958 | 0 | 0 | 3,130
44,679,656 | 2017-06-21T14:58:00.000 | 1 | 0 | 0 | 0 | python,http,scapy | 44,703,791 | 3 | false | 0 | 0 | Yes, you can. You can filter by TCP port 80 (checking each packet or using BPF) and then check the TCP payload to ensure there is an HTTP header.
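A hedged sketch of that approach with Scapy (the payload check is deliberately crude and illustrative):
from scapy.all import sniff, TCP, Raw

def is_http(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # crude check for an HTTP request/response start line
        return payload.startswith((b'GET ', b'POST ', b'HTTP/1.'))
    return False

sniff(filter='tcp port 80', lfilter=is_http,
      prn=lambda p: p.summary(), count=10)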
I.E. Is there a way to filter packets using Scapy that are only HTTP? | Using Scapy to fitler HTTP packets | 0.066568 | 0 | 1 | 3,860 |
44,682,984 | 2017-06-21T17:50:00.000 | 3 | 0 | 0 | 0 | python-2.7,tls1.2,pyopenssl | 44,884,465 | 1 | false | 0 | 0 | If you are looking at the date, you can say [25 Aug 2016], since that is when openssl v1.1.0 was released, as you mentioned.
This is not a problem with the version of pyopenssl, but rather with the version of the openssl library you are using, since pyopenssl, a wrapper library of openssl for Python, doesn't modify openssl itself. So, if you are using openssl v1.1.0 or later, TLS v1.2 is the default; otherwise, TLS v1.2 is not the default. | 2 | 0 | 0 | I want to know when exactly it was added that the default cipher selection would be TLSv1.2 ciphers while using the wrapper function of Python's openssl.
I was able to find this data in the changelog of openssl. It seems that even though TLSv1.2 support was added on 14th March, 2012, it was made a default option only on 25th August, 2016.
Changes between 1.0.0h and 1.0.1 [14 Mar 2012]
*) Initial TLS v1.2 support. Add new SHA256 digest to ssl code, switch
to SHA256 for PRF when using TLS v1.2 and later. Add new SHA256 based
ciphersuites. At present only RSA key exchange ciphersuites work with
TLS v1.2. Add new option for TLS v1.2 replacing the old and obsolete
SSL_OP_PKCS1_CHECK flags with SSL_OP_NO_TLSv1_2. New TLSv1.2 methods
and version checking.
Changes between 1.0.2h and 1.1.0 [25 Aug 2016]
*) Changes to the DEFAULT cipherlist:
- Prefer (EC)DHE handshakes over plain RSA.
- Prefer AEAD ciphers over legacy ciphers.
- Prefer ECDSA over RSA when both certificates are available.
- Prefer TLSv1.2 ciphers/PRF. <---
- Remove DSS, SEED, IDEA, CAMELLIA, and AES-CCM from the
default cipherlist.
[Emilia Käsper]
[Steve Henson]
But this is for the openssl library.
For pyopenssl, the changelog seems less detailed, there is an update and version release (version 16.1.0) just after a day (on 26th August, 2016), but they have not mentioned this detail. And I am not able to find a mention of which exact version of openssl is the pyopenssl using in the changelog history. | When was TLSv1.2 ciphers were added as a default selection while using python's openssl module | 0.53705 | 0 | 0 | 461 |
44,686,664 | 2017-06-21T21:36:00.000 | -1 | 0 | 1 | 0 | python,xlsx,xlrd,online-compilation | 72,494,283 | 3 | false | 0 | 0 | import tabula
# Read a PDF file
df = tabula.read_pdf("file:///C:/Users/tanej/Desktop/salary.pdf", pages='all')[0]
# Convert the PDF into CSV
tabula.convert_into("file:///C:/Users/tanej/Desktop/salary.pdf", "file:///C:/Users/tanej/Desktop/salary.csv", output_format="csv", pages='all')
print(df) | 2 | 0 | 0 | Me and my group are currently working on a school project where we need to use an online python compiler, since we are not allowed to install or download any software on their computers. The project requires me to read data from a .xlsx file.
Is there any online IDE with xlrd that can read the file that is on the school's computer?
I've been looking at a few but can't seem to find any that have this support. On tutorialspoint.com it is possible to upload the Excel file but not to import xlrd. Other sites have xlrd but don't allow uploading files to the site. | Read excel file with an online Python compiler with xlrd | -0.066568 | 1 | 0 | 2,482
44,686,664 | 2017-06-21T21:36:00.000 | 0 | 0 | 1 | 0 | python,xlsx,xlrd,online-compilation | 44,687,122 | 3 | false | 0 | 0 | Could the pandas package and its pandas.read_clipboard function help? You'd need to copy the content of the file manually to the clipboard before starting your script.
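A small hedged sketch of that (you must copy the spreadsheet contents, e.g. with Ctrl+C in Excel, before running):
import pandas as pd

df = pd.read_clipboard()  # parses the clipboard contents as a table
print(df.head())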
Alternatively - is it considered cheating to just rent a server? Pretty cheap these days.
Finally: you don't usually require admin rights to install Python... so assuming it's not a violation of school policy, the Anaconda distribution for instance is very happy to be installed for the local user only. | 2 | 0 | 0 | Me and my group are currently working on a school project where we need to use an online python compiler, since we are not allowed to install or download any software on their computers. The project requires me to read data from a .xlsx file.
Is there any online IDE with xlrd that can read the file that is on the school's computer?
I've been looking at a few but can't seem to find any that has this support. On tutorialspoint.com it is possible to upload the excel file but not import xlrd. Other sites has xlrd but doesn't allow for uploading files to the site. | Read excel file with an online Python compiler with xlrd | 0 | 1 | 0 | 2,482 |
44,687,986 | 2017-06-21T23:56:00.000 | 0 | 0 | 1 | 0 | python,cmd,pip,virtualenv,anaconda | 44,695,179 | 1 | true | 0 | 0 | A simple solution to this (which does work) would be to use Powershell as an Administrator, instead of cmd.
Conversely, use cmd as an administrator, though I would recommend using the much-more powerful Powershell for any and all purposes!
Why this works:
A lot of commands need super-user rights (think root/ sudo in linux) in order to be properly executed.
Since there is no such thing as sudo in Windows yet, you can implement it via admin privileges.
Cheers! | 1 | 2 | 0 | I tried to set up a virtual environment for my project by executing virtualenv myenv. The folder seemed to be generated, but the command hung, and I couldn't execute another command. I had to close the console and restart cmd. The folder was generated, as I said, but I couldn't activate the virtual environment by venv\Scripts\activate.
I met the same behaviour while trying to execute pip freeze > requirements.txt. The file was generated, but it was empty, although I used a lot of packages in my project. When I executed just pip freeze, the list of packages was printed, but the command hung again, and I had to close the console again.
I tried both procedures many times, but with no success. I tried that in Windows cmd and Anaconda Prompt (Anaconda version: Anaconda3 2.4.1; Python: 3.5.1).
EDIT: when I tried to do this for the very first time some days ago, I succeeded in activating the virtual environment, but only for one time. | Python hangs while executing pip and virtualenv and brings no results | 1.2 | 0 | 0 | 753 |
44,692,663 | 2017-06-22T07:18:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 44,692,941 | 1 | false | 0 | 0 | If you are using a Virtual environment do pip freeze and check whether "wit" package is installed or not.
If "wit" is installed open ipython and check whether you can import the "wit" package.
If "wit" is not imported in ipython check if any dependency packages are need for "wit". | 1 | 0 | 0 | I'm using python version 3.6.1
I want to use a method from one Python file in another located in the same directory. I have used from utils import wit_response in the app.py file. When I compile it, this shows an error:
Traceback (most recent call last):
File "app.py", line 4, in
from utils import wit_response
File "E:\Study\Python\fbmessengerbot\utils.py", line 1, in
from wit import Wit
ModuleNotFoundError: No module named 'wit'
In the utils.py file I'm using from wit import Wit. The wit package I have already installed.
How can I resolve this?
Thanks in advance. | Error: How to import method from one file to other in a same directory | 0 | 0 | 0 | 281 |
44,692,668 | 2017-06-22T07:19:00.000 | 13 | 0 | 1 | 0 | python,virtualenv,pyenv | 44,700,745 | 3 | true | 0 | 0 | Use pip freeze > requirements.txt to save a list of installed packages.
Create a new venv with python 3.6.
Install saved packages with pip install -r requirements.txt. When pip finds a universal wheel in its cache it installs the package from the cache. Other packages will be downloaded, cached, built and installed.
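A hedged sketch of the whole flow with pyenv/pyenv-virtualenv (the version and environment name are illustrative):
pip freeze > requirements.txt        # run this inside the old 3.4 env
pyenv install 3.6.1
pyenv virtualenv 3.6.1 myproject-3.6
pyenv activate myproject-3.6
pip install -r requirements.txt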
| 1 | 19 | 0 | I used pyenv and pyenv-virtualenv for managing Python virtual environments.
I have a project working in a Python 3.4 virtual environment.
So all installed packages (pandas, numpy, etc.) are not the newest versions.
What I want to do is upgrade the Python version from 3.4 to 3.6, as well as upgrade the other package versions to higher ones.
How can I do this easily? | Python: How can I update python version in pyenv-virtual-environment? | 1.2 | 0 | 0 | 15,968 |
44,692,668 | 2017-06-22T07:19:00.000 | -2 | 0 | 1 | 0 | python,virtualenv,pyenv | 44,692,794 | 3 | false | 0 | 0 | If you use anaconda, just type
conda install python==$pythonversion$ | 2 | 19 | 0 | I used pyenv, pyenv-virtualenv for managing python virtual environment.
I have a project working in Python 3.4 virtual environment.
So all installed packages (pandas, numpy, etc.) are not the newest versions.
What I want to do is upgrade the Python version from 3.4 to 3.6, as well as upgrade the other package versions to higher ones.
How can I do this easily? | Python: How can I update python version in pyenv-virtual-environment? | -0.132549 | 0 | 0 | 15,968 |
44,693,301 | 2017-06-22T07:48:00.000 | 0 | 1 | 1 | 0 | caching,python-sphinx | 44,693,633 | 1 | false | 0 | 0 | Because Sphinx imports the module, it does not seem to find the local copy, but rather imports the older version already installed on my system, and uses that to generate the docs. Running python setup.py install, and then regenerating everything finally worked. | 1 | 2 | 0 | I am trying to use Sphinx to create simple documentation for a python module. The first time I ran it, it worked fine. Now I have made some updates to the module, and rerunning the documentation commands:
$ sphinx-apidoc -P -F -f -e -o . /path/to/module
$ make html
it always uses the old version of the python module code. I have tried deleting the entire docs directory, moving the module, rechecking it out, updating sphinx - nothing works.
The old code is still being reused and cached somewhere. It is driving me absolutely insane. | Sphinx is caching python module somewhere: WHERE? | 0 | 0 | 0 | 363 |
44,696,136 | 2017-06-22T09:59:00.000 | 1 | 1 | 0 | 0 | python | 44,696,211 | 3 | false | 0 | 0 | Well, you need a mail server. Either locally, on your machine, or somewhere on the internet. This doesn't have to be gmail.
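Once you have one reachable, handing a message to it from Python is short; a hedged sketch with the standard library (the host and addresses are placeholders):
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = '[email protected]'
msg['To'] = '[email protected]'
msg['Subject'] = 'hello'
msg.set_content('sent from Python')

with smtplib.SMTP('smtp.example.com', 25) as server:
    server.send_message(msg)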
| 1 | 1 | 0 | I am using Debian 8 and I would like to be able to only send mail via Python, without installing a full-blown mail server system like Postfix and without using Gmail.
I can only see tutorials for sending mail with Python using a full mail server or via Gmail or another internet mail system. Isn't it possible to just send an email and not care about receiving any?
Thanks. | Sending mail via python | 0.066568 | 0 | 0 | 197 |
44,697,036 | 2017-06-22T10:39:00.000 | 10 | 0 | 1 | 1 | python,python-3.x,python-2.7 | 44,697,197 | 7 | false | 0 | 0 | On Debian (and derived distributions, like Ubuntu) install pydoc package. Then you can use pydoc whatever command. | 2 | 12 | 0 | Is there a way to install the python documentation that would make it available as if it was a manpage? (I know you can download the sourcefiles for the documentation and read them in vim, using less or whatever but I was thinking about something a bit less manual. Don't want to roll my own.) | Reading python documentation in the terminal? | 1 | 0 | 0 | 17,218 |
44,697,036 | 2017-06-22T10:39:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,python-2.7 | 72,360,631 | 7 | false | 0 | 0 | I will answer since I'm not satisfied with the accepted answer. Probably because I don't use IDLE.
Note that I use Ubuntu (terminal).
Probably in other OSs, it works the same.
Is there a way to install that? No need.
I found that it comes by default.
How to access that?
Use the help() command in the Python shell.
In the shell, type the command help().
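For example, a session might look like this (illustrative):
$ python3
>>> help()
help> str.split
(the pager opens with the documentation; press q to return)
help> quit
>>>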
Now that you're in the help utility, enter anything whose documentation you want to read. Press q to quit the documentation, and type quit to quit the help utility. | 2 | 12 | 0 | Is there a way to install the python documentation that would make it available as if it was a manpage? (I know you can download the sourcefiles for the documentation and read them in vim, using less or whatever but I was thinking about something a bit less manual. Don't want to roll my own.) | Reading python documentation in the terminal? | 0 | 0 | 0 | 17,218
44,697,947 | 2017-06-22T11:22:00.000 | 0 | 1 | 0 | 0 | python,zerorpc | 44,765,954 | 1 | true | 0 | 0 | What IP address are you binding the server onto? If you want to listen on all interfaces and all addresses, something like tcp://0.0.0.0:4242 should work.
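A minimal hedged sketch of that binding (the service class and port are illustrative):
import zerorpc

class Hello(object):
    def hi(self, name):
        return 'hi ' + name

server = zerorpc.Server(Hello())
server.bind('tcp://0.0.0.0:4242')  # all interfaces, not just loopback
server.run()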
| 1 | 0 | 0 | I run a ZeroRPC server and I can connect successfully with a client to the 127.0.0.1 IP.
However, when I use the public IP of the server from the client, I get the following error:
zerorpc.exceptions.LostRemote: Lost remote after 10s heartbeat
I have opened the port from the firewall (using ufw on Ubuntu) but still get the same error.
Do you have any ideas what the problem might be?
Thanks!! | python ZeroRPC heartbeat error on public IP | 1.2 | 0 | 0 | 311 |
44,698,229 | 2017-06-22T11:33:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 62,299,368 | 2 | false | 1 | 0 | Try
gcloud app deploy dispatch.yaml
...to connect services to dispatch rules. | 1 | 0 | 0 | I edited my dispatch.yaml and deployed on app engine using
appcfg.py update_dispatch .
But when I go and see source code under StackDriver debug, I don't see the change.
Why don't the changes get reflected? When I deploy the complete app with appcfg.py update . the changes do get reflected.
But in the case that I only want to update the dispatch file, how do I do that? | dispatch.yaml not getting updated | 0.099668 | 0 | 0 | 306
44,698,632 | 2017-06-22T11:51:00.000 | 4 | 0 | 0 | 0 | python,arrays,numpy,vectorization | 44,698,955 | 1 | true | 0 | 0 | For elementwise multiplication it does not matter, and flattening the array does not change a thing. Remember: arrays, no matter their dimension, are saved linearly in RAM. If you flatten the array before multiplication, you are only changing the way NumPy presents the data to you; the data in RAM is never touched. Multiplying the 1D or the 100D data is exactly the same operation.
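A quick hedged check of that equivalence (the shapes are illustrative):
import numpy as np

a = np.random.rand(2, 2, 2, 2)
b = np.random.rand(2, 2, 2, 2)
flat = (a.ravel() * b.ravel()).reshape(a.shape)
assert np.array_equal(a * b, flat)  # same values either way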
| 1 | 1 | 1 | I'm working with multidimensional matrices (~100 dimensions or so; see below why). My matrices are NumPy arrays and I mainly multiply them with each other.
Does NumPy care (with respect to speed or accuracy) in what form I ask it to multiply these matrices? I.e., would it make sense to reshape them into a linear array before performing the multiplication? I did some tests of my own with random matrices, and it seemed to be irrelevant, but I would like some theoretical insight into this.
I guess there is a limit to how large matrices can be before Python becomes slow handling them. Is there a way to find this limit?
I have several species (biology) and want to assign each of these species a fitness. Then I want to see how these different finesses affect the outcome of competition. And I want to check for all possible fitness combinations of all species. My matrices have many dimensions, but all dimensions are quite small. | What are the limits of vectorization? | 1.2 | 0 | 0 | 115 |
44,699,889 | 2017-06-22T12:49:00.000 | 1 | 0 | 0 | 0 | python,xgboost | 45,310,833 | 2 | true | 0 | 0 | Finally I have solved this issue by:
model.booster().get_score(importance_type='weight') | 1 | 0 | 1 | I am trying to perform feature selection (for regression tasks) with XGBRegressor().
More precisely, I would like to know:
If there is something like the method feature_importances_, utilized with XGBClassifier, which I could use for regression.
If the XGBoost's method plot_importance() is reliable when it is used with XGBRegressor() | XGBoost - Feature selection using XGBRegressor | 1.2 | 0 | 0 | 2,264 |
44,703,003 | 2017-06-22T14:59:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 44,703,268 | 1 | true | 0 | 0 | Multiprocessing is generally for when you want to take advantage of the computational power of multiple processing cores. Multiprocessing limits your options on how to handle shared state between components of your program, as memory is copied initially on process creation, but not shared or updated automatically. Threads execute from the same region of memory, and do not have this restriction, but cannot take advantage of multiple cores for computational performance. Your application does not sound like it would require large amounts of computation, and simply would benefit from concurrency to be able to handle user input, networking, and a small amount of processing at the same time. I would say you need threads not processes. I am not experienced enough with asyncio to give a good comparison of that to threads.
Edit: This looks like a fairly involved project, so don't expect it to go perfectly the first time you hit "run", but definitely very doable and interesting.
Here's how I would structure this project...
I see effectively four separate threads here (maybe small ancillary dameon threads for stupid little tasks)
I would have one thread acting as your temperature controller (PID control / whatever) that has sole control of the heater output. (other threads get to make requests to change setpoint / control mode (duty cycle / PID))
I would have one main thread (with a few dameon threads) to handle the data logging: Main thead listens for logging commands (pause, resume, get, etc.) dameon threads to poll thermometer, rotate log files, etc..
I am not as familiar with networking, and this will be specific to your client application, but I would probably get started with http.server just for prototyping, or maybe something like websockets and a little bit of asyncio. The main thing is that it would interact with the data logger and temperature controller threads with getters and setters rather than directly modifying values
Finally, for the keypad input, I would likely just make up a quick tkinter application to grab keypresses, because that's what I know. Again, form a request with the tkinter app, but don't modify values directly; use getters and setters when "talking" between threads. It just keeps things better organized and compartmentalized.
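A tiny hedged sketch of that getter/setter pattern between threads (the class and values are illustrative):
import threading

class Controller:
    def __init__(self):
        self._lock = threading.Lock()
        self._setpoint = 0.0

    def get_setpoint(self):
        with self._lock:
            return self._setpoint

    def set_setpoint(self, value):
        # called by the network / keypad threads, never touched directly
        with self._lock:
            self._setpoint = value

ctrl = Controller()
threading.Thread(target=ctrl.set_setpoint, args=(21.5,), daemon=True).start()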
| 1 | 1 | 0 | I am trying to build a temperature control module that can be controlled over a network or with manual controls. The individual parts of my program all work, but I'm having trouble figuring out how to make them all work together. Also, my temperature control module is Python and the client is C#.
As far as physical components go, I have a keypad that sets a temperature and turns the heater on and off, an LCD screen that displays temperature data, and of course a temperature sensor.
For my network stuff I need to:
constantly send temperature data to the client.
send a list of log files to the client.
await prompts from the client to either set the desired temperature or send a log file to the client.
So far all the hardware works fine and each individual part of the network functions works, but not together. I have not tried to use both physical and network components.
I have been attempting to use threads for this but was wondering if I should be using something else?
EDIT:
here is the basic logic behind what i want to do:
Hardware:
keypad takes number inputs until '*'; it then sets a temp variable.
temp variable is compared to sensor data and the heater is turned on or off accordingly.
'#' turns off the heater and sets the temp variable to 0.
sensor data is written to log files while temp variable is not 0
Network:
upon client connect the client is sent a list of log files
temperature sensor data is continuously sent to client.
prompt handler listens for prompts.
if client requests log file the temperature data is halted and the file sent after which the temperature data is resumed.
client can send a command to the prompt handler to set the temp variable to trigger the heater
client can send a command to the prompt handler to stop the heater and set temp variable to 0
commands from either the keypad or client should work at all times. | should I be using threads, multiprocessing, or asyncio for my project? | 1.2 | 0 | 0 | 53
44,703,138 | 2017-06-22T15:05:00.000 | 0 | 0 | 0 | 0 | python,pandas,beautifulsoup | 44,703,767 | 2 | false | 1 | 0 | Another idea would be to use the BeautifulSoup library first to get all the table elements from a webpage, and then apply pd.read_html().
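A hedged sketch of looping over the named subpages (the base URL is a placeholder; pd.read_html also needs an HTML parser such as lxml or BeautifulSoup installed):
import pandas as pd

base = 'http://example.com/'
tables = {name: pd.read_html(base + name)[0]
          for name in ['Tom', 'Mary', 'Jason']}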
| 1 | 0 | 0 | I'm looking for ways to scrape all tables on a certain website. The tables are formatted exactly the same in all subpages. The problem is, the URLs of those subpages look like this:
url1 = 'http.../Tom',
url2 = 'http.../Mary',
url3 = 'http.../Jason', such that I cannot set a loop by altering the url incrementally. Are there any possible ways to solve this by pandas? | Is it possible to use pandas to scrape html tables over multiple web pages? | 0 | 0 | 1 | 424 |
44,703,243 | 2017-06-22T15:09:00.000 | 0 | 0 | 1 | 0 | python,django | 44,719,815 | 2 | true | 1 | 0 | Finally got the hang .The problem is with windows , not the installation process. For the same task the commands doesn't remain the same in everyone's case as changing the command from django-admin --version to C:>path to installation directory>python django-admin-script.py --version from the script directory worked in my case. | 1 | 0 | 0 | PROBLEM
Even after installing Django, typing the command django-admin --version shows the message "failed to create the process".
what I've done
I have installed Python version 3.6.1, pip version 9.0.1, and easy_install version 28.0.1, and then installed Django version 1.9 using easy_install.
In the environment variables of my computer I've set the PATH to both the Python folder and the Scripts folder. | django1.9 after installation is not recognised by the command prompt | 1.2 | 0 | 0 | 83
44,705,077 | 2017-06-22T16:39:00.000 | 0 | 0 | 0 | 0 | python,opencv,cmake | 44,717,895 | 1 | false | 0 | 1 | The problem was an old version of the module lurking in a different folder where the python script was actually looking. This must have been created in the past with an OpenCV 3.1 environment. | 1 | 0 | 0 | I'm trying to run a python script that uses a custom module written by someone else. I created that module by running CMake according to the creator's instructions. Running my python script, I get the error: ImportError: libopencv_imgproc.so.3.1: cannot open shared object file: No such file or directory. This error is caused by the module I created earlier.
There is no file of that name since I have OpenCV 3.2.0 installed, so in usr/local/lib there's libopencv_imgproc.so.3.2.0. I don't know how to fix this or where to start looking. The CMakeLists.txt of the module has a line
find_package(OpenCV 3 COMPONENTS core highgui imgproc REQUIRED).
I tried changing it to
find_package(OpenCV 3.2.0 COMPONENTS core highgui imgproc REQUIRED),
without success. | How can I force CMake to use the correct OpenCV version? | 0 | 0 | 0 | 430 |
44,706,353 | 2017-06-22T17:55:00.000 | 3 | 0 | 1 | 0 | python,tkinter,main | 44,706,671 | 1 | true | 0 | 1 | Will mainloop() continually update while it waits for another button press, or will I essentially have to create an update method that cycles through everything until another button is pressed?
Not at all. That's why you are using tk.Tk().mainloop(). tkinter does this for you. All you are expected to do is implement the functionality that should happen when your button is pressed. tkinter will listen for the button press.
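A minimal hedged sketch of that division of labor (the callback body is a placeholder for your serial/ADC logic):
import tkinter as tk

root = tk.Tk()

def on_press():
    print('button pressed')  # your send/measure logic goes here

tk.Button(root, text='Send', command=on_press).pack()
root.mainloop()  # tkinter waits for events and dispatches presses for you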
| 1 | 2 | 0 | So I was wondering if someone would be able to help shed a little light for me on something I am working on in Python.
I am creating a program with a Tkinter GUI interface that interacts with a serial device and an ADC chip to measure voltage. I want to make sure I properly understand how I'm building the main program loop to keep everything running smoothly. I'm going to lay out how I think the program should run; if anyone has any corrections, please throw them at me.
Program is run, GUI Interface initializes
User presses a button
send signal of button through serial
measure/display voltage levels
periodically update voltage display
if button is pressed, return to step 3
Now I know to run my Tkinter GUI I set up mainloop() as the last line of code. Now my question is simply, is that all I will need? Will mainloop() continually update while it waits for another button press, or will I essentially have to creatre an update method that cycles through everything until another button is pressed? | Want Clarification for Program Loop (Python) | 1.2 | 0 | 0 | 60 |