Q_Id
int64 337
49.3M
| CreationDate
stringlengths 23
23
| Users Score
int64 -42
1.15k
| Other
int64 0
1
| Python Basics and Environment
int64 0
1
| System Administration and DevOps
int64 0
1
| Tags
stringlengths 6
105
| A_Id
int64 518
72.5M
| AnswerCount
int64 1
64
| is_accepted
bool 2
classes | Web Development
int64 0
1
| GUI and Desktop Applications
int64 0
1
| Answer
stringlengths 6
11.6k
| Available Count
int64 1
31
| Q_Score
int64 0
6.79k
| Data Science and Machine Learning
int64 0
1
| Question
stringlengths 15
29k
| Title
stringlengths 11
150
| Score
float64 -1
1.2
| Database and SQL
int64 0
1
| Networking and APIs
int64 0
1
| ViewCount
int64 8
6.81M
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
43,522,463 | 2017-04-20T14:31:00.000 | 3 | 0 | 0 | 0 | python,pygame,collision | 43,522,823 | 1 | false | 0 | 1 | Never mind, I found the answer. if rect1.rect.bottom >= rect2.rect.top and rect1.rect.bottom <= rect2.rect.bottom: | 1 | 2 | 0 | How can I find if two rects collide on a certain side? (e.g. rect1.rect.top, rect2.rect.bottom) I've tried rect1.rect.colliderect(rect2) and pygame.sprite.collide_rect(rect1, rect2), but they don't find the individual side collisions. | Pygame - Rect collision by side | 0.53705 | 0 | 0 | 1,087 |
43,523,750 | 2017-04-20T15:23:00.000 | 1 | 1 | 0 | 0 | python,angularjs,angular,caching,browser-cache | 43,525,479 | 2 | false | 1 | 0 | Are you referring to development time? If so, this can often be set in the browser. Which browser are you using? For Chrome:
Open the developer tools.
Select the network tab.
Check the Disable cache checkbox | 1 | 0 | 0 | working on a website built with python and angular but after every single change I need to hard reset browser to see it. I see this is a problem in general so what's the best approach for caching js and css resources ? right now I dont care if each file will be called on every single page if there is no other option. | angular/python - browser caching issue on each change | 0.099668 | 0 | 0 | 129 |
43,525,447 | 2017-04-20T16:50:00.000 | 0 | 0 | 1 | 0 | python,pypy | 43,530,861 | 1 | false | 0 | 0 | Try running the pandas code with PyPy. Compatibility is rather good these days. | 1 | 0 | 0 | Is there any feature in pypy that allows for a standard CPython 2.7 interpreter to be run for a designated section of code? I have a function that has pandas code within it (its a performance intensive function, benefiting greatly by pandas), all references to pandas are contained within that function.
Obviously pypy can't interpret pandas code due to pandas' C-bound nature. Is there a way that I can "switch over" to a standard interpreter just for this function? The codebase as a whole greatly benefits from a pypy interpreter. | Switch from pypy to CPython for sections of code | 0 | 0 | 0 | 114 |
43,526,551 | 2017-04-20T17:52:00.000 | 1 | 0 | 1 | 0 | python-sphinx | 64,704,941 | 2 | false | 1 | 0 | I just came across this question, and can answer for Sphinx 3.3 with Python 3.8.
All I had to do was put html_extra_path = ['extra'] into conf.py. Then I made docs/extra/snippets. Now snippets and all of its content are copied into _build/html/snippets. | 1 | 1 | 0 | I have the following directory structure for my Sphinx project.
root
build
source
index.md
snippets
I want to copy root/source/snippets to build/html/snippets as is; this directory has code snippet files.
I don't think .. include::, :download: and html_extra_path are what I need. Is there a way to do this using the build configuration? Or is the only way to modify the make build scripts.
I am using Python v2.7.13 and Sphinx v1.5.1. | How do I include a subdirectory and its content to the Sphinx output/build directory? | 0.099668 | 0 | 0 | 596 |
43,528,001 | 2017-04-20T19:12:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,factorial,design-by-contract,post-conditions | 43,528,347 | 3 | false | 0 | 0 | The definition "A requirement that should be satisfied by the function before it ends." is correct.
Consider the function create_logger(file_name) it takes a string parameter and file_name, returns a file stream for the given file_name which can be used to write log messages.
In this case post condition is that returned object be a writable file stream (main objective of the function).
Additionally it might also ensure that is clears any previous file with identical name (house-keeping/clean-up activity).
And there is enough space/permission to write to newly created file (sanity check).
Post condition can be created for both the main objective and sanity checks. which will become true at some point during the execution of the function and before it returns.
However it is not necessary that these condition remain True after the function returns. Hence "A requirement that should be satisfied by the function after it ends." is incorrect.
For example at some later point after the function return the disk might fill up and returned file stream object may not be writable any more. | 2 | 1 | 0 | In "Think Python: How to Think Like a Computer Scientist", the author defines postcondition as:
A requirement that should be satisfied by the function before it ends.
He also states:
Conversely, conditions at the end of the function are postconditions.
Postconditions include the intended effect of the function (like
drawing line segments) and any side effects (like moving the Turtle or
making other changes).
So assume that we have a function called factorial that has a required parameter called n, isn't the expected postcondition of it that it must (i.e it is required to) return a positive integer that represents the product of numbers from 1 through n? Isn't this requirement satisfied after factorial ends?
Is this definition right?
Would defining postcondition as "A requirement that should be satisfied by the function after it ends." be right?
Note: I'm a beginner in programming, in general, and Python, in particular. | Is "A requirement that should be satisfied by the function before it ends." a right definition for postcondition, in Python? | 0 | 0 | 0 | 114 |
43,528,001 | 2017-04-20T19:12:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,factorial,design-by-contract,post-conditions | 43,528,122 | 3 | false | 0 | 0 | A postcondition is "a requirement that must be true at the moment a function ends", i.e.: At the exact moment the function ends, and nothing more has happened, the postcondition of the function must be true.
The definition in your book is actually somewhat consistent with this: If the postcondition is satisfied by the function before it ends, and the function doesn't do anything that would render the condition false, then of course the postcondition will be true at the moment the function ends.
Your definition is also consistent with this, in that right after the function ends, its postcondition must be true.
I think the main issue here is the definition of the word "satisfy". If we take "to satisfy a condition" to mean "to make that condition true" (which seems to be the definition your book uses) then a postcondition must become true at some point while the function runs and before it returns so that it may be true at the moment the function's execution ends. If you take "satisfy" to mean "to have the condition be true" (which seems to be how you are using the word), then your definiton makes sense - immediately after the function ends, its postcondition must be true.
Semantics! | 2 | 1 | 0 | In "Think Python: How to Think Like a Computer Scientist", the author defines postcondition as:
A requirement that should be satisfied by the function before it ends.
He also states:
Conversely, conditions at the end of the function are postconditions.
Postconditions include the intended effect of the function (like
drawing line segments) and any side effects (like moving the Turtle or
making other changes).
So assume that we have a function called factorial that has a required parameter called n, isn't the expected postcondition of it that it must (i.e it is required to) return a positive integer that represents the product of numbers from 1 through n? Isn't this requirement satisfied after factorial ends?
Is this definition right?
Would defining postcondition as "A requirement that should be satisfied by the function after it ends." be right?
Note: I'm a beginner in programming, in general, and Python, in particular. | Is "A requirement that should be satisfied by the function before it ends." a right definition for postcondition, in Python? | 0.066568 | 0 | 0 | 114 |
43,531,607 | 2017-04-20T23:40:00.000 | 1 | 1 | 0 | 0 | python,numpy,hash | 43,531,728 | 1 | true | 0 | 0 | (x + y) % z == ((x % z) + (y % z)) % z. So you could take the modulus before doing the sum:
Cast a and x to uint64. (Multiply two uint32 will never overflow uint64).
Compute h = (a * x) % p + b
Return (h - p) if h > p else h. (Alternatively: return h % p) | 1 | 1 | 1 | For one of my pet projects I would like to create an arbitrary number of different integer hashes. After a bit of research I figured that universal hashing is the way to go. However, I struggle with the (numpy) implementation. Say I'm trying to use the hash family h(a,b)(x) = ((a * x + b) mod p) mod m and my x can be anywhere in the range of uint32. Then choosing p >= max(x) and a, b < p means that in the worst case a * x + b overflows not only uint32 but also uint64. I tried to find an implementation which solves this problem but only found clever ways to speed up the modulo operation and nothing about overflow.
Any suggestions for how to implement this are greatly appreciated. Thanks :-) | Integer overflow in universal hashing implementation | 1.2 | 0 | 0 | 189 |
43,531,951 | 2017-04-21T00:21:00.000 | 2 | 0 | 0 | 0 | python,machine-learning | 43,532,000 | 1 | true | 0 | 0 | The lack of negative examples does not make this unary classification; there is no such modelling, as one-class has no discrimination, and therefore derives no new information from the data set.
As you've pointed out, there are two classes: Legitimate and not. That's binary. Use any binary classifier from your research that's capable of deriving boundaries from positive data only. For instance, so-called "one-class" SVM is one such classifier. | 1 | 0 | 1 | I'm new to machine learning and looking to do run a training / testing dataset through a few classifiers, but the problem I'm having is that I only have one label for my data (Legitimate, currently set as an int so 1 for legit, 0 for not). Ideally I'm looking for a classifier that is going to run with just one label and either confirm or deny if something falls into that label, without the need to specify a second label or class.
Any help would be greatly appreciated!
Many thanks. | Python Machine Learning classifier for just one label | 1.2 | 0 | 0 | 438 |
43,534,537 | 2017-04-21T05:24:00.000 | 0 | 1 | 1 | 0 | python | 43,534,609 | 2 | true | 0 | 0 | Rename the python interpreter executables to their respective versions. The OS is just executing the first 'python' executable it finds in the path, which is probably the 3.x version. So in command line, you can type python2 or python3 to select the version of interpreter you want. | 1 | 0 | 0 | I'm doing a learn python the hardway tutorial, and they are using python2.7
I got it downloaded but unable to switch back from 3.3 to 2.7
I manipulated PATH variable, adding C:\Python27 but this was no use
any other suggestion? | i have both python2 and 3, I want to use python2 but on powershell I'm using python3 | 1.2 | 0 | 0 | 120 |
43,535,202 | 2017-04-21T06:12:00.000 | 4 | 0 | 1 | 0 | python,anaconda | 62,217,456 | 4 | false | 0 | 0 | Anaconda distribution has been on my computer for last 2 years, on & off, so I feel that I have some experience using it.
Anaconda tries to be a Swiss army knife, and the fact remains, everything that is available with anaconda, can be manually installed using PIP.
If you're a beginner, and don't intend to do some comprehensive stuff in data science/ML field, I don't see any reason that you will need to install Anaconda. If you still want to have conda on your machine, go for it, but if you have python pre-installed, remove it first, and then use conda. (Otherwise you'll have to be specific and observant of where is it that the new python packages being installed on your computer.)
Conda dist. usually occupies 2-4 GB of space very easily.(There is a light installer known as miniconda, but it too goes on to consume memory considerably)
When you use conda command to install a python package, it usually pulls additional (maybe unnecessary for a beginner) packages along with it, thus consuming more & more space on your device. So, if your machine is slow and you have less space, Anaconda is a big NO-NO for you.
Anaconda (IMHO) is a finely tuned hype in the internet space of beginner python users.
And even if you have sufficient memory and a capable device, I don't find why should you spend that for things that you may never use. Unless you have a significant benefit when doing so, which could be more pronounced for those in a professional environment.
There are ways to bulk install everything you need using PIP, And PIP only installs what we demand/command from the terminal, nothing additional stuff, unless we ask for it.
Also, keep in mind, if you want to do data science, ML, Deep learning things, go for 64-bit version of python, so that every module you need can be installed without countering errors. | 2 | 14 | 0 | Recently I have started programming in Python (Python 3.5) on my Linux OS. But I am confused about Anaconda. What is it actually? Is it a version of Python or something else? If I do not install Anaconda will there be any limitations? | Confusion between Python and Anaconda | 0.197375 | 0 | 0 | 31,945 |
43,535,202 | 2017-04-21T06:12:00.000 | 1 | 0 | 1 | 0 | python,anaconda | 69,930,325 | 4 | false | 0 | 0 | Anaconda is nothing but a python and R distribution. If you are working on Machine learning or data science field, tou will find anaconda very useful. So installing anaconda will also install python, conda(which is a package manager in anaconda), a lot of third party python packages, an IDE(like spyder), jupyter notebook(which is very helpful to write codes and visualise results and run codes cell by cell) . However, if tou are just a beginner, installing only python would be enough. Python will have certain standard libraries that will be installed along with it. And when u need new packages, you can use pip to install them.
P.s. if you have low memory space and u are just beginning, anaconda is a no no as it will have many packages installed by default, which u might not use. But installing python requires less memory and when u need a third party library, u can use pip to install libraries. | 2 | 14 | 0 | Recently I have started programming in Python (Python 3.5) on my Linux OS. But I am confused about Anaconda. What is it actually? Is it a version of Python or something else? If I do not install Anaconda will there be any limitations? | Confusion between Python and Anaconda | 0.049958 | 0 | 0 | 31,945 |
43,538,158 | 2017-04-21T08:53:00.000 | 1 | 0 | 1 | 0 | python-2.7,matplotlib | 54,817,932 | 2 | false | 0 | 0 | If you use plt.ion() instead of plt.show() the script will continue to run without you needing to close the window and the plot will update in the open window.
Alternatively, you can use plt.show(block=False) | 2 | 1 | 0 | I have a Python Script which is producing a plot, everything works fine. But I would like to keep this plot open (to compare it with others), this doesn't work, the script doesn't continue to run until I have closed the plot window. Any suggestions what I can do? Thanks :) | Python Script requires plot to close before continuing | 0.099668 | 0 | 0 | 2,218 |
43,538,158 | 2017-04-21T08:53:00.000 | 0 | 0 | 1 | 0 | python-2.7,matplotlib | 43,539,133 | 2 | true | 0 | 0 | When calling plt.show() the event loop is transfered to the window shown. The rest of the program will only continue once that window is closed.
The suggestion would be to simply call plt.show() at the end of the script, when all figures are ready to be shown at once. | 2 | 1 | 0 | I have a Python Script which is producing a plot, everything works fine. But I would like to keep this plot open (to compare it with others), this doesn't work, the script doesn't continue to run until I have closed the plot window. Any suggestions what I can do? Thanks :) | Python Script requires plot to close before continuing | 1.2 | 0 | 0 | 2,218 |
43,542,157 | 2017-04-21T11:58:00.000 | 0 | 0 | 1 | 0 | python,dll | 44,690,127 | 3 | false | 0 | 0 | Check your environment variable. i think the PYTHONHOME variable may be pointing to the wrong directory | 3 | 1 | 0 | I am trying to make a .sln file for Visual Studio and in that process I am facing a problem
File "socket.py", line 47, in
import _socket
ImportError: DLL load failed: The specified module could not be found.
This socket.py is present in Python27/Lib folder.
I have checked there is no other version of python installed which is clashing with Python27. | File "socket.py", line 47, in import _socket ImportError: DLL load failed: The specified module could not be found | 0 | 0 | 1 | 1,145 |
43,542,157 | 2017-04-21T11:58:00.000 | 0 | 0 | 1 | 0 | python,dll | 53,442,821 | 3 | false | 0 | 0 | These kinds of problem generally happens when you have multiple virtual environments of venv available in your system.
Check in the preferences of the visual studio / Any other IDE setting, they generally point to a particular venv .
Change it to point to the venv where this module is installed and then it work
Hope it helps
Thanks | 3 | 1 | 0 | I am trying to make a .sln file for Visual Studio and in that process I am facing a problem
File "socket.py", line 47, in
import _socket
ImportError: DLL load failed: The specified module could not be found.
This socket.py is present in Python27/Lib folder.
I have checked there is no other version of python installed which is clashing with Python27. | File "socket.py", line 47, in import _socket ImportError: DLL load failed: The specified module could not be found | 0 | 0 | 1 | 1,145 |
43,542,157 | 2017-04-21T11:58:00.000 | 0 | 0 | 1 | 0 | python,dll | 67,999,990 | 3 | false | 0 | 0 | If the error is import _socket failed then the file _socket was not installed or it was deleted on mistake, I had the same problem and reinstalling python made stuff fine. As for _socket, its a .pyd file which will have some C code used by socket to code a class. If you did not understand this, open python IDLE and press alt and m together, then type socket, hit enter and the source code will open, scroll down until you see the code starting and you'll find the line import _socket. | 3 | 1 | 0 | I am trying to make a .sln file for Visual Studio and in that process I am facing a problem
File "socket.py", line 47, in
import _socket
ImportError: DLL load failed: The specified module could not be found.
This socket.py is present in Python27/Lib folder.
I have checked there is no other version of python installed which is clashing with Python27. | File "socket.py", line 47, in import _socket ImportError: DLL load failed: The specified module could not be found | 0 | 0 | 1 | 1,145 |
43,543,469 | 2017-04-21T13:03:00.000 | 2 | 0 | 0 | 0 | python,superset | 43,544,802 | 1 | false | 0 | 0 | I found a possibility to create a fast filter.
A table 'fcountry' is created, where all countries (DE,FR, etc.) are stored. This table is used to create the filter widget, which is added to the Dashboard.
However, I'm still looking for a handy solution for EU without DE (where Country!='DE'). At the Moment I need to select all the countries except DE. | 1 | 1 | 1 | I have a test data table (called "eu") with around 5 millions Observations, where each Observation belongs a country in the EU. Now I'd like to implement a country filter on my dashboard for further Analysis.
There already exists a filter widget, which does a query on my data set "eu" to get distinct Countries. However, I already know the Country names for the filter and I could skip (if possible) the query. Is there a possibility to set a filter without a query?
The filter would look like:
DE (where Country='DE')
FR (where Country='FR')
etc.
EU without DE (where Country!='DE')
Regards | Dashboard Filter Superset | 0.379949 | 0 | 0 | 1,424 |
43,545,427 | 2017-04-21T14:36:00.000 | 1 | 0 | 1 | 1 | python,windows,cython | 43,545,428 | 1 | true | 0 | 0 | This was caused by the Avira antivirus. Disabling its real-time protection fixed the problem. I eventually replaced it with Avast, which so far hasn't given me any trouble. | 1 | 1 | 0 | I'm developing a native Python module (DLL or PYD) on Windows using Cython. Every time I rebuild it, the first time it's loaded blocks for 15 seconds, during which time the CPU and disk are completely idle. Subsequent attempts run normally, until I rebuild the module again.
This happens with both the Cygwin and MSYS2 builds of Python. | 15-second idle delay loading Windows native Python module | 1.2 | 0 | 0 | 50 |
43,546,529 | 2017-04-21T15:25:00.000 | 3 | 0 | 1 | 0 | python,python-3.x | 43,546,641 | 3 | false | 0 | 0 | if you want some kind of state persistance then your options are limited:
save the state into a file as you suggest in your question (either a text file or spreadsheet, but spreadsheet is harder to do)
change your concept so that instead of "running the script" multiple times, the script is always running, but you give it some kind of signal (keyboard input, GUI with a button etc) to let it know to increment the counter
split your script into two halves, a server script and a client. the server would listen for connections from the client, and keep track of the current count, the client would then connect and tell the server to increment the count, if needed it could also send the previous or new count back to the client for some kind of output. this would prevent having many writes to disk, but the count would be lost if the server process is closed. | 1 | 3 | 0 | I am trying to write a script that will have a count that starts a 001, and increasing by one every time that the script is run.
I just help some help started this off, what can I do to set it up so that it knows where it start from every time? Is there is way that I can build it into the script to do this?
My bad ideas about how to do this so far:
- Have the number(001) exported to a text file, and have the script change that number at the end of every script (001 +1).
This number will also be in a spreadsheet, so have the script read to value from the spreadsheet, and add one to that value.
I feel like there has to be an easier way, and I'd prefer a way that was self-contained within the script. Can someone help point me in the right direction?
Thanks for your input. | Python script to input the next number in a sequence every time it runs. | 0.197375 | 0 | 0 | 545 |
43,546,753 | 2017-04-21T15:37:00.000 | 0 | 0 | 1 | 0 | python,windows-7,anaconda | 47,446,012 | 1 | false | 0 | 0 | I finally got to my limit when a colleague imposed the use of 32-bit Python and I had to start switching back and forth between 32-bit and 64-bit frequently.
The way I solved this - which might not be very good, but it works - is to write a one-line Windows .bat file:
set PATH=%USERPROFILE%\AppData\Local\Continuum\miniconda2;%USERPROFILE%\AppData\Local\Continuum\miniconda2\Scripts;%USERPROFILE%\AppData\Local\Continuum\miniconda2\Library\bin
Add all the standard, unchanging stuff you have in the PATH variable to that line as well, or else it'll be cleared.
Then to switch back and forth quickly I just run version.bat from the command prompt. Pretty simple, really. | 1 | 0 | 0 | I'm using Anaconda to manage libraries for Python 3.
I have a team I work with using a network drive, but running scripts from the network drive is often very slow, so I'd like a local installation of Python as well.
I've got Anaconda installed on my local machine, but when I use conda list in the command prompt I still get a message that says 'packages in environment at' and then my network drive installation location.
I have a feeling this has to do with setting the PATH variable. How can I switch where Anaconda (and Python) is sourced from? | Changing python source from network to local drive | 0 | 0 | 0 | 53 |
43,547,685 | 2017-04-21T16:25:00.000 | 1 | 0 | 1 | 0 | python,file-handling | 43,547,860 | 4 | false | 0 | 0 | One does this sort of thing in audio coding lots, where files can be huge. The normal way as I understand it is just to have a memory buffer and do it in two stages: read a blob of arbitrary size into buffer (4096 or whatever), then stream characters from the buffer, reacting to the line endings. Because the buffer is in ram, streaming character by character out of it is fast. I'm not sure what data structure or call would be best to do it with in Python though, I've actually only done this in C, where it's just a block of ram. But the same approach should work. | 1 | 2 | 0 | I have a large text file(more than my RAM) and I need to use each line in it for further processing. But if I read say like 4096 bytes at a time I'm worried about splitting the line somewhere in between. How do i proceed? | I need to split a very large text file | 0.049958 | 0 | 0 | 840 |
43,549,448 | 2017-04-21T18:13:00.000 | 0 | 0 | 0 | 0 | python,image,pygame | 43,549,524 | 2 | false | 0 | 1 | You can also query the size of the image. Adjust the corner coordinates by half of the size in each direction. | 1 | 0 | 0 | I am trying to move an image from its center using pygame. I loaded the image using pygame but I have the top corner coordinates of the image. how do i get the coordinates of center. | Finding the position of center using pygame | 0 | 0 | 0 | 1,167 |
43,551,724 | 2017-04-21T20:45:00.000 | 1 | 0 | 0 | 0 | python,codeskulptor | 65,641,998 | 2 | false | 0 | 1 | Use exit(). It's compatible with both Codeskulptor and normal Python so you can implement it in other programs in and out of Codeskulptor too. | 1 | 0 | 0 | I am making a pong game in code skulptor, i have created an exit game button, but i need some sort of function to exit the whole entire game. The pong game is complete and this is the only feature left, thanks for all the help in advance as i'm rather new to python :) | How to exit a simplegui frame in codeskulptor? | 0.099668 | 0 | 0 | 513 |
43,552,146 | 2017-04-21T21:15:00.000 | 0 | 0 | 0 | 0 | python,html,beautifulsoup,urllib | 43,559,174 | 3 | false | 1 | 0 | Your problem might be that the page elements are Dynamic. (Revealed by JavaScript for example)
Why is this a problem? A: You can't access those tags or data. You'll have to use either a headless/Automated browser ( Learn more about selenium ).
Then make a session through selenium and keep feeding the data the way you wanted to the Arduino.
Summary: If you inspect elements you can see the tag, if you go to view source you cant see it. This can't be solved using bs4 or requests alone. You'll have to use a module called Selenium or something similar. | 1 | 2 | 0 | Basically I'm using python to send serial data to an arduino so that I can make moving dials using data from the game. This would work because you can use the url "localhost:8111" to give you a list of these stats when ingame. The problem is I'm using urllib and BeautifulSoup but they seem to be blindly reading the source code not giving the data I need.
The data I need comes up when I inspect the element of that page. Other pages seem to suggest that using something to run the HTML in python would fix this but I have found no way of doing this. Any help here would be great thanks. | Trying to read data from War Thunder local host with python | 0 | 0 | 1 | 853 |
43,553,221 | 2017-04-21T23:00:00.000 | 0 | 0 | 1 | 0 | python,xen | 43,553,275 | 1 | false | 0 | 0 | What about writing a simple txt file each time a new backup is made?
Could be something like this: backup_ddmmyy_h_m.txt in an backup_cache directory?
And then before you make a new backup you simply check if you have 10 backup txt files, and delete the oldest one. | 1 | 0 | 0 | Ok, so I wrote a python program which creates backups of virtual machines on a virtual machine server and saves them onto an NFS. I want to make it so that only the most recent 10 backups are saved. So after 10 backups, start over writing the first, second, third, etc. What is the best approach for this? All i could think of is to have a text file which contains all the log information and a current state. Is there a better route? This is for Xen Server which uses python. thanks | Best way to overwrite files after a specific number are in a directory in python | 0 | 0 | 0 | 51 |
43,557,522 | 2017-04-22T09:26:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,python-3.x,pip | 43,558,655 | 2 | false | 0 | 0 | Try pip2 instead of pip.For Example:
Pip2 install .... | 1 | 0 | 0 | I have Linux mint 18.x installed. When i ran pip initially it installed packages to python 2.7.x. I also installed pip3 and it handled python3 package install. But after I followed some instructions for other reasons and did apt-get update / upgrade, pip now installs to python3 and not 2.7.x. How can I reset please as I use both. Is it a matter of rerunning:
sudo python pip.py? | pip now installs to python3 not python 2.7.x | 0.291313 | 0 | 0 | 1,471 |
43,557,881 | 2017-04-22T10:03:00.000 | 0 | 0 | 1 | 0 | python,anaconda,keras,jupyter-notebook | 43,576,690 | 9 | false | 0 | 0 | I have realized that I had two different Jupyter's directories, so I have manually deleted one on them. Finally, I have reinstalled Anaconda. Now Keras works properly. | 5 | 9 | 1 | I have installed Tensorflow and Keras by Anaconda (on Windows 10), I have created an environment where I am using Python 3.5.2 (the original one in Anaconda was Python 3.6).
When I try to execute import keras as ks, I get ModuleNotFoundError: No module named 'keras'.
I have tried to solve this issue by sys.path.append(C:\\Users\\ ... \\Anaconda3\\python.exe)
with both notebook and console, but I continue to get the same error.
How could I solve this issue? | Jupyter can't find keras' module | 0 | 0 | 0 | 27,759 |
43,557,881 | 2017-04-22T10:03:00.000 | 1 | 0 | 1 | 0 | python,anaconda,keras,jupyter-notebook | 43,557,975 | 9 | false | 0 | 0 | (Not an answer but some troubleshooting hints)
sys.path is not the path to your Python executable, but the path to the libraries.
Check where Keras is installed and check your sys.path. How exactly did you install Keras?
Try to execute the same command from the Python interpreter. Do you have the same issue?
How did you install Jupiter, is the sys.path visible from there the same as sys.path visible from your Python interpreter?
Do Jupiter and Keras use the same version of Python?
You might try to uninstall Jupiter and install it again, and hope that the new installation picks up the packages which are already installed. What could happen is that you have more than one Python installation and different libraries being installed to the different places. sys.path, when requested from different environments, might give you a hint if that's true. | 5 | 9 | 1 | I have installed Tensorflow and Keras by Anaconda (on Windows 10), I have created an environment where I am using Python 3.5.2 (the original one in Anaconda was Python 3.6).
When I try to execute import keras as ks, I get ModuleNotFoundError: No module named 'keras'.
I have tried to solve this issue by sys.path.append(C:\\Users\\ ... \\Anaconda3\\python.exe)
with both notebook and console, but I continue to get the same error.
How could I solve this issue? | Jupyter can't find keras' module | 0.022219 | 0 | 0 | 27,759 |
43,557,881 | 2017-04-22T10:03:00.000 | 0 | 0 | 1 | 0 | python,anaconda,keras,jupyter-notebook | 60,808,850 | 9 | false | 0 | 0 | Here's how I solved this problem.
First, the diagnosis. When I run which python in a terminal window on my Mac (the same terminal I used to launch jupyter, I get /Users/myusername/.conda/envs/myenvname/bin/python, but when I run the same command from a terminal within Jupyter, I get /usr/bin/python. So Jupyter isn't using the correct python executable; the version it's using doesn't have any of my packages installed.
But which jupyter returns /usr/bin/jupyter; it's using a version of jupyter that isn't coming from within my conda environment. I ran conda install jupyter and now which jupyter returns /Users/myusername/.conda/envs/myenvname/bin/jupyter (for some reason I had to restart the terminal window for this to take effect.) Then if I relaunch jupyter notebook, the notebook is using the correct version of Python and I have access to all my installed conda packages. | 5 | 9 | 1 | I have installed Tensorflow and Keras by Anaconda (on Windows 10), I have created an environment where I am using Python 3.5.2 (the original one in Anaconda was Python 3.6).
When I try to execute import keras as ks, I get ModuleNotFoundError: No module named 'keras'.
I have tried to solve this issue by sys.path.append(C:\\Users\\ ... \\Anaconda3\\python.exe)
with both notebook and console, but I continue to get the same error.
How could I solve this issue? | Jupyter can't find keras' module | 0 | 0 | 0 | 27,759 |
43,557,881 | 2017-04-22T10:03:00.000 | 0 | 0 | 1 | 0 | python,anaconda,keras,jupyter-notebook | 51,338,496 | 9 | false | 0 | 0 | If you are a windows/mac user who are working on Jupyter notebook “pip install keras” doesn't help you .Try the below steps.It was solved for me
1. In command prompt navigate to the “site packages” directory of your anaconda installed.
2. Now use “conda install tensorflow” and after “conda install keras”
3. Re-start your Jupyter notebook and run the packages. | 5 | 9 | 1 | I have installed Tensorflow and Keras by Anaconda (on Windows 10), I have created an environment where I am using Python 3.5.2 (the original one in Anaconda was Python 3.6).
When I try to execute import keras as ks, I get ModuleNotFoundError: No module named 'keras'.
I have tried to solve this issue by sys.path.append(C:\\Users\\ ... \\Anaconda3\\python.exe)
with both notebook and console, but I continue to get the same error.
How could I solve this issue? | Jupyter can't find keras' module | 0 | 0 | 0 | 27,759 |
43,557,881 | 2017-04-22T10:03:00.000 | 0 | 0 | 1 | 0 | python,anaconda,keras,jupyter-notebook | 56,765,580 | 9 | false | 0 | 0 | Acually, I did this command pip install keras and sudo -H pip3 install keras and pip3 install keras. None of them worked. I added the following command and everything worked like a charm:
pip install Keras. Yes a capital 'K' | 5 | 9 | 1 | I have installed Tensorflow and Keras by Anaconda (on Windows 10), I have created an environment where I am using Python 3.5.2 (the original one in Anaconda was Python 3.6).
When I try to execute import keras as ks, I get ModuleNotFoundError: No module named 'keras'.
I have tried to solve this issue by sys.path.append(C:\\Users\\ ... \\Anaconda3\\python.exe)
with both notebook and console, but I continue to get the same error.
How could I solve this issue? | Jupyter can't find keras' module | 0 | 0 | 0 | 27,759 |
43,557,926 | 2017-04-22T10:08:00.000 | 0 | 0 | 0 | 0 | python,mysql,flask | 43,557,984 | 4 | false | 1 | 0 | Add config file main file and set set 'charset' => 'utf8mb4'
you have to edit field in which you want to store emoji and set collation as utf8mb4_unicode_ci | 1 | 4 | 0 | I'm using Python 2.7 and flask framework with flask-sqlalchemy module.
I always get the following exception when trying to insert : Exception Type: OperationalError. Exception Value: (1366, "Incorrect string value: \xF09...
I already set MySQL database, table and corresponding column to utf8mb4_general_ci and I can insert emoji string using terminal.
Flask's app config already contains app.config['MYSQL_DATABASE_CHARSET'] = 'utf8mb4', however it doesn't help at all and I still get the exception.
Any help is appreciated | Flask SQLAlchemy can't insert emoji to MySQL | 0 | 1 | 0 | 1,485 |
43,558,763 | 2017-04-22T11:40:00.000 | 1 | 0 | 0 | 1 | python,macos | 43,718,280 | 2 | false | 0 | 0 | Have you tried to do a which python to see if the actual Python version used is the one installed through brew(I assume you did a brew install pythonbecause of the path under /usr/local)?
If the Python executable is not the one under /usr/local then you might be in trouble, take into account that installing through brew won't replace default system Python. | 1 | 1 | 0 | Good morning, I'm playing with launch daemons on my Mac running OSX El Capitan. I've made the script in Python that I would like to run when my machine boots (it should snap a picture through the webcam and save it to a directory I specify). I've made the appropriate plist, booted into recovery mode to disable csrutil, and then added the plist to /System/Library/LaunchDaemons. Upon reboot, I do not see any pictures (nor does the green webcam light turn on).
I checked the error log for the script and found that the python script throws an error that it cannot import CV2 (ImportError: no module named cv2). However, I do have cv2 installed and it works once the system is booted. My script seems to be able to load other modules (os, datetime, and time) as they are imported before cv2.
Is this an additional security feature? Is there a way to work around this? If there is a workaround, will it work even when csrutil is enabled? I don't want to be running around with that disabled, I just disable it to make the necessary changes to the LaunchDaemons directory, and re-enable it after. I did reboot with csrutil disabled and still received the import error, so it doesn't seem to be that (at least as far as I can tell).
Thanks!
Edit: Some more googling led me to discover that the python path specified in the plist for my daemon was not the one with which openCV was associated. However, a quick echo $PYTHONPATH gives me /usr/local/lib/python2.7/site-packages, which when put in the plist no longer gives an error on startup, but now doesn't seem to execute at all.
Also, I've tried changing the directory I write to be /tmp/ since all users have access to that, but still to no avail. | Importing modules with a Python launch daemon (OSX) | 0.099668 | 0 | 0 | 197 |
43,563,128 | 2017-04-22T18:37:00.000 | 0 | 0 | 0 | 1 | python,mapreduce,hadoop2 | 43,590,222 | 1 | false | 0 | 0 | If you are running mapreduce in local mode (e.g., from eclipse), it will only run one mapper and one reducer at a time. If you are running it in distributed (or pseudo-distributed) mode (e.g., using the hadoop command from the terminal, it can run with more mappers.
Make sure to set the max number of mappers to more than 1 in the configuration files.
If you have 4 files, your Mac has at least 4 cores, then you should see at least 4 map tasks running simultaneously. | 1 | 1 | 0 | I am using Hadoop 2.8.0 in my Mac. I want to run all the mappers simultaneously. I tried by forcing to make more than one split of input file and using more than one input files, so that multiple mappers are created. They are created, but they run sequentially. I see in the output something like this:
starting task ****_m_0
...............
finising task ****_m_0
starting task ****_m_1
Why mappers run one after another? how can I configure so that they start at once? | how to run multiple mappers in single node simultaneously | 0 | 0 | 0 | 263 |
43,563,447 | 2017-04-22T19:05:00.000 | 1 | 0 | 1 | 0 | python,json,r,bigdata | 43,563,552 | 1 | false | 0 | 0 | The jsonlite R package supports streaming your data. In that way there is no need to read all the json data into memory. See the documentation of jsonlite for more details, the stream_in function in particular.
Alternatively:
I would dump the json into a mongo database and process the data from that. You need to install mongodb, and start running mongod. After that you can use mongoimport to import the json file into the database.
After that, you can use the mongolite package to read data from the database. | 1 | 1 | 1 | I was trying to do some exploratory analyses on a large (2.7 GB) JSON dataset using R, however, the file doesn't even load in the first place. When looking for solutions, I saw that I could process the data in smaller chunks, namely by iterating through the larger file or by down-sampling it. But I'm not really sure how to do that with a JSON dataset. I also thought of converting the original JSON data into .csv, but after having a look around that option didn't look that helpful.
Any ideas here? | How to iterate/ loop through a large (>2GB) JSON dataset in R/ Python? | 0.197375 | 0 | 0 | 609 |
43,565,014 | 2017-04-22T21:49:00.000 | 2 | 0 | 0 | 0 | python,html,django,dynamic | 43,565,086 | 4 | false | 1 | 0 | Django is a server side framework. So it has little to do with HTML.
Django will give you easier/standardized ways to handle HTTP requests, and to manipulate entries in the database, among other things.
HTML5 alone doesn't enable dynamic web-pages. You can have interactive web pages, but they will always be the same, for every user, whenever you access it. | 4 | 0 | 0 | Essentially, my questions are as stated above in the title. What I'm really seeking to know is why it would be privy of me to build a web-page utilizing the Django framework as opposed to solely building it out with HTML5 and CSS3.
I do know that Django utilizes bootstrapping of HTML5 and CSS and this is where my questions are raised over the efficiency of using the Django framework as opposed to solely using HTML5/CSS3.
1.) What are the advantages of Django?
2.) What does utilizing the Django framework offer me that HTML5/CSS3 do not?
3.) HTML5 can also build dynamic web-pages as can Django. Why would Django be better for a dynamic web-page?
I am looking for a very valid answer here as I am about to start building my web-page. The responses I get to these questions will be the nail in the coffin for which method I will be using to build the web-page. Thanks ladies and gentleman and I hope you find this question to be worth your while in answering. | What Are The Advantages of Building a Web-Page with Django as Opposed to Solely Using HTML5/CSS3? | 0.099668 | 0 | 0 | 1,470 |
43,565,014 | 2017-04-22T21:49:00.000 | 1 | 0 | 0 | 0 | python,html,django,dynamic | 43,565,218 | 4 | false | 1 | 0 | Django is a python web application framework that allows you to send requests from your page to a server that will in turn provide a response back to your web page.
Advantages: The power of Django is the ability to quickly get both the client ( your page ) and the backend ( the server-side logic ) setup. The backend can include writing to a database, processing information, retrieving information which is subsequently a response delivered to your web page.
HTML5/CSS3 is markup languages for your web page. You can use a editors like sublime or even notepad ++ if you are building a static web page. Django, like most web app frameworks, are used because of what I've described in #1 ( and many other unlisted reasons ).
HTML5 provides the ability to make dynamic web pages ( using a client side library like JQuery as an embedded script ), Django helps you build web apps. You can write a web page using only HTML5 and JQuery to display list of tv shows that are currently on ABC by listing what is currently playing today, but what about for tomorrow? You need server-side help by creating response that will fetch all shows for tomorrow by calling the ABC API. Take a look at server-side logic and web applications.
In short, there are web pages and web applications. Sounds like to me you are building the former, so Django might be overkill. | 4 | 0 | 0 | Essentially, my questions are as stated above in the title. What I'm really seeking to know is why it would be privy of me to build a web-page utilizing the Django framework as opposed to solely building it out with HTML5 and CSS3.
I do know that Django utilizes bootstrapping of HTML5 and CSS and this is where my questions are raised over the efficiency of using the Django framework as opposed to solely using HTML5/CSS3.
1.) What are the advantages of Django?
2.) What does utilizing the Django framework offer me that HTML5/CSS3 do not?
3.) HTML5 can also build dynamic web-pages as can Django. Why would Django be better for a dynamic web-page?
I am looking for a very valid answer here as I am about to start building my web-page. The responses I get to these questions will be the nail in the coffin for which method I will be using to build the web-page. Thanks ladies and gentleman and I hope you find this question to be worth your while in answering. | What Are The Advantages of Building a Web-Page with Django as Opposed to Solely Using HTML5/CSS3? | 0.049958 | 0 | 0 | 1,470 |
43,565,014 | 2017-04-22T21:49:00.000 | 1 | 0 | 0 | 0 | python,html,django,dynamic | 43,565,333 | 4 | false | 1 | 0 | 1.) What are the advantages of Django?
Server-side scripting without the necessity to use PHP. If you already worked with Python, you don't need to learn another language for you server-side.
2.) What does utilizing the Django framework offer me that HTML5/CSS3 do not?
Hm, deployment to a server, handling user requests and dynamically generated webpages. You mentioned making an intricate website in a comment. I don't know what you mean by that, but a framework will let you do this way faster then without. Especially, if you only rely on client-side JS with static HTML5 and CSS3, I'm fairly certain you will have a hard time achieving your goal.
3.) HTML5 can also build dynamic web-pages as can Django. Why would Django be better for a dynamic web-page?
I'm not really sure you understand what dynamic means. Dynamic means generated from code, as opposed to static, which means served directly from an .html file. Django let's you do both, it's a framework and offers lots of flexibility. | 4 | 0 | 0 | Essentially, my questions are as stated above in the title. What I'm really seeking to know is why it would be privy of me to build a web-page utilizing the Django framework as opposed to solely building it out with HTML5 and CSS3.
I do know that Django utilizes bootstrapping of HTML5 and CSS and this is where my questions are raised over the efficiency of using the Django framework as opposed to solely using HTML5/CSS3.
1.) What are the advantages of Django?
2.) What does utilizing the Django framework offer me that HTML5/CSS3 do not?
3.) HTML5 can also build dynamic web-pages as can Django. Why would Django be better for a dynamic web-page?
I am looking for a very valid answer here as I am about to start building my web-page. The responses I get to these questions will be the nail in the coffin for which method I will be using to build the web-page. Thanks ladies and gentleman and I hope you find this question to be worth your while in answering. | What Are The Advantages of Building a Web-Page with Django as Opposed to Solely Using HTML5/CSS3? | 0.049958 | 0 | 0 | 1,470 |
43,565,014 | 2017-04-22T21:49:00.000 | 0 | 0 | 0 | 0 | python,html,django,dynamic | 43,584,653 | 4 | false | 1 | 0 | If you want to serve same dish for all visitors to your site, HTML is fine. But if you want to server different dish to different user then you'll need ingredients and a way to churn them. Ingredients can be users, their profile and preferences, location, and other entities users are dealing with. Django is one way to churn all of these together and present (in HTML for example) to users. | 4 | 0 | 0 | Essentially, my questions are as stated above in the title. What I'm really seeking to know is why it would be privy of me to build a web-page utilizing the Django framework as opposed to solely building it out with HTML5 and CSS3.
I do know that Django utilizes bootstrapping of HTML5 and CSS and this is where my questions are raised over the efficiency of using the Django framework as opposed to solely using HTML5/CSS3.
1.) What are the advantages of Django?
2.) What does utilizing the Django framework offer me that HTML5/CSS3 do not?
3.) HTML5 can also build dynamic web-pages as can Django. Why would Django be better for a dynamic web-page?
I am looking for a very valid answer here as I am about to start building my web-page. The responses I get to these questions will be the nail in the coffin for which method I will be using to build the web-page. Thanks ladies and gentleman and I hope you find this question to be worth your while in answering. | What Are The Advantages of Building a Web-Page with Django as Opposed to Solely Using HTML5/CSS3? | 0 | 0 | 0 | 1,470 |
43,565,102 | 2017-04-22T21:57:00.000 | 2 | 0 | 0 | 0 | python,nodes,igraph,vertex | 43,573,147 | 2 | true | 0 | 0 | Take a look at the strength() method of Graph objects - it should do exactly what you need (i.e. calculate the sum of some edge attribute for the incident edges of a given vertex and then assign it to a vertex). | 1 | 1 | 0 | I have an igraph.Graph object with edges having weights. For each vertex I want to sum up the weights of the adjacent edges and assign it to a new vertex attribute gg.vs['weight']. | How do I move the edge weights to Vertex weights in Igraph Python | 1.2 | 0 | 0 | 610 |
43,567,501 | 2017-04-23T04:49:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 43,567,554 | 1 | true | 0 | 0 | Subtract y from x, then divide the result by 24. If there is no remainder then they will be equal at some point.
(x - y) % 24 = 0 | 1 | 0 | 0 | I am working on a python Script and I have two variables
count or x which 4991
and toWhat or y which is 1199.
These values will change, but when they do change, I need to make sure that y will eventually equal x if you add +24 to y after a iteration of while loop.
How can I create a function that will check the values of x and y to make sure this is the case? | How to get x to always eventually equal y by adding steps of 24 | 1.2 | 0 | 0 | 39 |
43,567,749 | 2017-04-23T05:27:00.000 | 0 | 0 | 1 | 0 | python,pyserial | 45,462,345 | 1 | false | 0 | 0 | if you are using conda/miniconda you can get pyserial 3.4 now via the conda-forge channel
To install this package with conda run:
conda install -c conda-forge pyserial | 1 | 0 | 0 | I initially installed python in miniconda (python3.6). So when I did pip install pyserial , it installed pyserial under miniconda3/lib/python3.6/site-packages
Later I also installed python2.7. How can I install pyserial (or atleast tell it to use the above pyserial version) for python2.7? | Pyserial install for different python versions | 0 | 0 | 0 | 273 |
43,568,198 | 2017-04-23T06:46:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 43,568,360 | 2 | false | 0 | 0 | GIL is not a problem of python itself, it is a problem of its cpython implementation, as the memory management in cpython implementation is not threadsafe.
Cpython is implemented with a global lock on the python interpreter. So, one particular python CPU bound operation can be run on a particular interpreter at a particular time. So, no matter you are running single/multiple thread, it's all the same.
But, imagine, if there is some I/O bound task, like database queries, or file operations, where python code isn't actually get executed, you can be benefited greatly with multithreading. | 2 | 2 | 0 | I was told that using threading in Python is not a good practice because of the GIL. I think the overhead of creating threads will just slow things down and eventually make it slower than a single threaded application.
Then, why would Python have the threading library in the first place? When should you use threading?
(I am assuming Python3) | When do you use threading in Python? | 0 | 0 | 0 | 75 |
43,568,198 | 2017-04-23T06:46:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 43,568,510 | 2 | false | 0 | 0 | @kindall has given the answer in his comment.
Multi-threading is not a magic tool that speeds up any processing. It is a way to have multiple threads present in system at the same time. It can be used to split processing on multiple cores for compute-bound processing (this one not in CPython because of the Global Interpreter Lock). But it can also be used for io-bound processing: one thread runs while the others wait for io completion. A nice example of that is a multithreaded TCP server where each thread can serve a different client connection. CPython implemention is good here because the GIL has no reason to block a thread that is already blocked at io level.
And just to end on general multi-threading: it should never be used for memory bound processing... | 2 | 2 | 0 | I was told that using threading in Python is not a good practice because of the GIL. I think the overhead of creating threads will just slow things down and eventually make it slower than a single threaded application.
Then, why would Python have the threading library in the first place? When should you use threading?
(I am assuming Python3) | When do you use threading in Python? | 0.099668 | 0 | 0 | 75 |
43,569,402 | 2017-04-23T09:25:00.000 | 0 | 0 | 1 | 0 | python-3.x | 43,569,454 | 2 | false | 0 | 0 | What you have there is not ASCII, as it contains for instance the byte \xe6, which is higher than 127. It's still UTF8.
The representation of the string (with the 'b' at the start, then a ', then a '\', ...), that is ASCII. You get it with repr(yourstring). But the contents of the string that you're printing is UTF8.
But I don't think you need to turn that back into an UTF8 string, but it may depend on the rest of your code. | 1 | 0 | 0 | Python 3.6
I converted a string from utf8 to this:
b'\xe6\x88\x91\xe6\xb2\xa1\xe6\x9c\x89\xe7\x94\[email protected]'
I now want that chunk of ascii back into string form, so there is no longer the little b for bytes at the beginning.
BUT I don't want it converted back to UTF8, I want that same sequence of characters that you ses above in my Python string.
How can I do so? All I can find are ways of converting bytes to string along with encoding or decoding. | In Python 3, how can I convert ascii to string, *without encoding/decoding* | 0 | 0 | 0 | 3,073 |
43,571,768 | 2017-04-23T13:26:00.000 | 0 | 0 | 1 | 0 | themes,python-sphinx,read-the-docs | 43,587,070 | 2 | false | 1 | 0 | ReadTheDocs doesn't purge the build directory on every new build. I fixed it by manually cleaning all temporary files from ReadTheDocs:
all build directories
created content from autoapi plugins
After that, my modified RTD theme was used for new builds.
To do so I added a tag detection in conf.py. If my tag is set, it does a cleanup. You can set user defined tags in the ReadTheDocs web UI. | 1 | 1 | 0 | I am facing a strange problem. I added latest sphinx_rtd_theme files on to my project and did the required theme overrides (including the well-known text wrapping within the tables).
The local build works absolutely fine. But it was pushed to the master, the theme goes back to very basic and it is not getting applied. I tried manual builds but still I could not trace the root cause and the workaround.
By master, I mean the builds that are created on the ReadTheDocs website (account). | sphinx_rtd_theme is not getting applied on the ReadTheDocs builds but local builds work fine | 0 | 0 | 0 | 1,500 |
43,574,199 | 2017-04-23T17:17:00.000 | 1 | 0 | 1 | 0 | python | 43,574,355 | 1 | true | 0 | 0 | cp means CPython - reference implementation of Python written in C.
Probably, it is the version you need, unless you are using some other Python implementation, like PyPy or Jython. | 1 | 0 | 0 | I was trying to download pillow from Unofficial Windows
Binaries for Python Extension Packages and there is a cp version, what is it?
Do I have to download it or does it come by default? | What is cp in Unofficial Windows Binaries for Python Extension Packages? | 1.2 | 0 | 0 | 718 |
43,575,436 | 2017-04-23T19:15:00.000 | 1 | 0 | 0 | 1 | python-2.7,cqlsh,cassandra-2.2 | 62,266,742 | 2 | false | 0 | 0 | For centos 8 and other similarly:
Install python 2.7
Then, prior to invoking cqlsh, run:
sudo alternatives --set python /usr/bin/python2 | 1 | 1 | 0 | I am trying to install cassandra version 2.2.0 and I found the compatible python version for it is 2.7.10 then I installed it.
when I type in terminal
python2.7 --version
Python 2.7.10
but when I launch cassandra server and want to start cassandra query language shell by typing
root@eman:/usr/local/cassandra# bin/cqlsh
bin/cqlsh: 19: bin/cqlsh: python: not found
how could I fix this issue
thanks in advance | launching cassandra cqlsh python not found | 0.099668 | 0 | 0 | 2,108 |
43,575,874 | 2017-04-23T20:01:00.000 | 0 | 0 | 1 | 0 | python,data-structures | 43,576,060 | 1 | false | 0 | 0 | Python does not provide a way to prevent resizing of dicts. | 1 | 1 | 0 | Python doesnt have built in fixed size arrays, all arrays are resizable in python. A hashtable uses arrays because they have a fixed size for efficiency. Pythons list auto resizes, this is a costy operation which in my opinion eliminates part of upside of using hashtables because of their efficiency. Is there a way to make fixed size hashtables in python? | Is it possible to create a non resizable hashtable in Python? | 0 | 0 | 0 | 61 |
43,578,201 | 2017-04-24T00:57:00.000 | 0 | 0 | 1 | 0 | python,python-docx | 43,578,375 | 1 | false | 0 | 0 | I can think of a few approaches. The one that's best for you may depend on the variation in cell height and so on.
One way is just to keep extending the table and not worry about page breaks. You'd have to get the table settings right, but there are options for whether it can break a row (or maybe a cell) across pages, and you would set that to no. If python-pptx doesn't have that option you can just set it in an empty table in the template document and extend that table rather than create a new one.
Another is to insert a page break and add a new table. You would need to set the cell widths explicitly in the new tables. There's no option for copying a table. Assuming there are a fixed number of labels in particular positions on each page, this should be straightforward. | 1 | 0 | 0 | I'm attempting to modify an existing one page label template (.docx) to add items from a list of dictionaries. Navigating the list and adding text content to each cell in the table is no problem with python-docx module but I need to create a new page which contains a table with identical formatting as the first table. Manually, this is easy to do by tabbing off the last cell on the page.
My idea was to add a page break at the end of the table and then create the table on the new page based on the formatting of the original table. I've not been able to get the code to work for this.
For now I've manually extended the label document by just tabbing to create a bunch of new pages but this is not ideal because the number of items in the list can vary widely (from 5 to ~1500 addresses). I don't want a lot of extra blank pages at the end of half the documents. | Adding pages to a docx label template | 0 | 0 | 0 | 397 |
43,578,533 | 2017-04-24T01:41:00.000 | 0 | 0 | 0 | 0 | python,numpy,pyspark,anaconda | 69,907,112 | 2 | false | 0 | 0 | Apart from upgrading and re-installing, sometimes it is caused by your Pandas. It might have dependency on older numpy so you may have to upgrade or reinstall pandas if upgrading numpy alone didn't resolve your problem. | 1 | 7 | 1 | I was wondering if anyone had this issue when running spark and trying to import numpy. Numpy imports properly in a standard notebook, but when I try importing it via a notebook running spark, I get this error. I have the most recent version of numpy and am running the most recent anaconda python 3.6.
Thanks!
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in ()
----> 1 import numpy
/Users/michaelthomas/anaconda/lib/python3.6/site-packages/numpy/__init__.py in ()
144 return loader(*packages, **options)
145
--> 146 from . import add_newdocs
147 __all__ = ['add_newdocs',
148 'ModuleDeprecationWarning',
/Users/michaelthomas/anaconda/lib/python3.6/site-packages/numpy/add_newdocs.py in ()
11 from __future__ import division, absolute_import, print_function
12
---> 13 from numpy.lib import add_newdoc
14
15 ###############################################################################
/Users/michaelthomas/anaconda/lib/python3.6/site-packages/numpy/lib/__init__.py in ()
6 from numpy.version import version as __version__
7
----> 8 from .type_check import *
9 from .index_tricks import *
10 from .function_base import *
/Users/michaelthomas/anaconda/lib/python3.6/site-packages/numpy/lib/type_check.py in ()
9 'common_type']
10
---> 11 import numpy.core.numeric as _nx
12 from numpy.core.numeric import asarray, asanyarray, array, isnan, \
13 obj2sctype, zeros
AttributeError: module 'numpy' has no attribute 'core' | AttributeError: module 'numpy' has no attribute 'core' | 0 | 0 | 0 | 22,785 |
43,579,175 | 2017-04-24T03:07:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,numpy | 43,579,231 | 3 | false | 0 | 0 | random.choices(['a', 'b'], weights=[0.8, 0.2], k=2) | 1 | 1 | 0 | I want to write a program with the following requirement.
arr = ['a', 'b']
How to write a python program which choose a from arr x% of time.
(For example 80% of time).
I have no idea how should I start. Please help.
I know random.choice(arr). But it will give a random choice I can not make it biased. | arr = [a,b] choose a, x% of time | 0 | 0 | 0 | 62 |
43,579,626 | 2017-04-24T04:05:00.000 | 0 | 0 | 0 | 0 | python,pandas,matplotlib,plot | 43,579,860 | 3 | false | 0 | 0 | Make 2 separate dataframes by using boolean masking and the where keyword. The condition would be if >0 or not. Then plot both datframes one by one ,one top of the other, with different parameters for the color. | 1 | 5 | 1 | I have a pandas dataframe where I am plotting two columns out the 12, one as the x-axis and one as the y-axis. The x-axis is simply a time series and the y-axis are values are random integers between -5000 and 5000 roughly.
Is there any way to make a scatter plot using only these 2 columns where the positive values of y are a certain color and the negative colors are another color?
I have tried so many variations but can't get anything to go. I tried diverging color maps, colormeshs, using seaborn colormaps and booleans masks for neg/positive numbers. I am at my wits end. | Pandas Plot With Positive Values One Color And Negative Values Another | 0 | 0 | 0 | 7,728 |
43,580,312 | 2017-04-24T05:19:00.000 | 1 | 0 | 1 | 0 | python,kdb | 43,659,555 | 3 | false | 0 | 0 | I have used the exxeleron qpython library fairly extensively, and have found it to be a nice package for Python <-> kdb+ IPC. Last I recall, it has issues with serialising multibyte characters (at least in Python 2.7) when sending to kdb+, so as a workaround I convert strings/symbols to bytecode and do a `$ or `char$ on the kdb+ side.
It's not the fastest thing in the world - its de/serialisation feels a little less fast than it could be (at least in 2.7 - I haven't tested in Python 3) - but it is a friendly interface to kdb+ IPC from Python. It has nice hooks for sub/pub model (using .receive on the connection object), and is relatively well-documented for something kdb+ related (there's even some nice client examples for pub/sub processing!).
I haven't tested with pyQ, which should in theory be better for doing computation-heavy work as it does as much as possible in kdb+ rather than in Python, but for times when you can offload most of your work to a kdb+ process and want to e.g. analyse results or use Python specific packages (e.g. for NLP/ML etc.) qpython works quite well. | 1 | 3 | 0 | From a thread dating a few years back I found some options to integrate python and kdb, namely
qpt
Dan's tools
PyQ
qPython
The last two seem to be the only ones actively updated at the moment. My question goes to the folks that actually use any (and ideally tried several) of these tools. From your experience, which of the two latter ones is more suitable for me. Selection criteria would be (in that order)
ease of use (I am new to q, ideally I would do more work in python than in q)
documentation (seems to be generally not great on anything kdb)
python 3.x support
speed
If I completely missed a tool that fits my requirements, please let me know. I am aware of threads that raise similar questions, but I am looking for a 2017 answer, not 2015. | Python + Q (KDB) - which tools are easy to use and well maintained | 0.066568 | 0 | 0 | 2,719 |
43,580,876 | 2017-04-24T06:08:00.000 | 0 | 0 | 0 | 0 | python-2.7,ncurses,curses,python-curses | 43,591,335 | 2 | false | 0 | 0 | putwin() and getwin() are the functions for saving and restoring an individual window, and they're available in Python. | 1 | 1 | 0 | I have created window in the curses and created my call flow (data).
window = curses.newwin(2500, 2500, 0, 0)
How should i copy the window content(exact replica) to the file ? | how to write the curses window content to the file in python? | 0 | 0 | 0 | 971 |
43,581,457 | 2017-04-24T06:45:00.000 | 0 | 0 | 0 | 0 | python,mongodb,eve | 43,589,357 | 1 | false | 0 | 0 | No that is not supported, and probably should not (see Andrey Shipilov comment). | 1 | 0 | 0 | Can I perform a PATCH request to collection?
Like UPDATE table SET foo=bar WHERE some>10 in SQL. | Bulk PATCH in python eve | 0 | 1 | 0 | 229 |
43,582,363 | 2017-04-24T07:40:00.000 | 0 | 0 | 1 | 0 | python | 43,582,474 | 4 | false | 0 | 0 | If your sequence is named seq, the "duck typing" way to test immutability would be to try to assign a value to seq[0], and catch the exception if it doesn't work... | 2 | 14 | 0 | For built-in Python types, list is mutable but tuple is not. For other sequences, is there a way to tell whether they are mutable or not? Like a mutable sequence usually has .pop(), .insert(), or .extend() member function? Do all mutable sequences and immutable sequences inherit from separate built-in types, which then can be used to differentiate them? | How to tell if a sequence is mutable or not? | 0 | 0 | 0 | 2,822 |
43,582,363 | 2017-04-24T07:40:00.000 | -5 | 0 | 1 | 0 | python | 43,582,542 | 4 | false | 0 | 0 | An object is immutable if it contains only immutable-typed as sub-objects.
A type is immutable if it is a built-in immutable type: str, int, bool, float, tuple. | 2 | 14 | 0 | For built-in Python types, list is mutable but tuple is not. For other sequences, is there a way to tell whether they are mutable or not? Like a mutable sequence usually has .pop(), .insert(), or .extend() member function? Do all mutable sequences and immutable sequences inherit from separate built-in types, which then can be used to differentiate them? | How to tell if a sequence is mutable or not? | -1 | 0 | 0 | 2,822 |
43,582,979 | 2017-04-24T08:14:00.000 | 0 | 0 | 0 | 0 | python,ros | 43,583,439 | 2 | false | 1 | 0 | gazebo has a node you can start and use it to exchange data:
gazebo_ros_api_plugin
The gazebo_ros_api_plugin plugin, located with the gazebo_ros package,
initializes a ROS node called "gazebo". It integrates the ROS callback
scheduler (message passing) with Gazebo's internal scheduler to
provide the ROS interfaces described below. This ROS API enables a
user to manipulate the properties of the simulation environment over
ROS, as well as spawn and introspect on the state of models in the
environment.
This plugin is only loaded with gzserver. | 1 | 1 | 0 | I am working on Guardian mobile robot. It has 2 ROS packages, one for real robot and other for Gazebo simulation. Mapping and navigation code is available in real robot package and not in Gazebo simulation package. Please tell, how can I run mapping and navigation code in Gazebo package. Thanks | How to run actual robot code in Gazebo Simulation? | 0 | 0 | 0 | 1,003 |
43,584,782 | 2017-04-24T09:44:00.000 | 0 | 0 | 1 | 0 | python,pycharm,anaconda | 43,585,076 | 1 | false | 0 | 0 | I hope you are making sure to select the right interpreter for both the project as well as general interpreter in File->settings. I have installed anaconda for both python 2.7 and python 3. I specify the path of the anaconda version I want to use for the current project and it works fine. | 1 | 1 | 0 | I'm having a little bit of trouble with Pycharm Community recognizing Anaconda 3.6.0. The interpreter works and runs programs, however, it doesn't seem to be reading the code intuitively and providing any of the suggestive features (autocompletion) or colouring for the different text (i.e all text is just grey). So it's more like a featureless text editor that can run code then a sophisticated IDE at the moment.
When using standard Python 2.7 as the interpreter it has all of the normal features that should be appearing. I thought Pycharm had full support for Anaconda and should have these features. Does anyone have any suggestions as to what to do? Thanks! | Pycharm integration with Anaconda 3.6.0 | 0 | 0 | 0 | 677 |
43,591,526 | 2017-04-24T15:00:00.000 | 1 | 1 | 0 | 1 | python-2.7,pyc | 43,591,665 | 1 | false | 0 | 0 | Is there a specific reason you're using the .pyc file? Normally, you'd just add a shebang to the top of your script like so: #!/usr/bin/env python, modify permissions (777 is not necessary, 755 or even 744 would work), and run it $ ./file.py | 1 | 0 | 0 | I have created a compiled python file. When I am executing the file using python command, then it is working fine like below.
$ python file.pyc
But, when I am putting ./ before the filename (file.pyc) like running a .sh file, then it is not working.It is throwing error.
$ ./file.pyc
It is having all the privileges (777).
Is there any way to execute the test.pyc file like we do with a test.sh file?
Regards,
Sayantan | How can i run a compiled python file like a shell script in Unix? | 0.197375 | 0 | 0 | 2,249 |
43,591,621 | 2017-04-24T15:04:00.000 | 4 | 0 | 0 | 0 | python,machine-learning,pickle,random-forest | 43,592,085 | 2 | true | 0 | 0 | In the best case (binary trees), you will have 3 * 200 * (2^30 - 1) = 644245094400 nodes or 434Gb assuming each one node would only cost 1 byte to store. I think that 140GB is a pretty decent size in comparision. | 1 | 13 | 1 | We have trained an Extra Tree model for some regression task. Our model consists of 3 extra trees, each having 200 trees of depth 30. On top of the 3 extra trees, we use a ridge regression.
We trained our model for several hours and pickled the trained model (the entire class object), for later use. However, the size of saved trained model is too big, about 140 GB!
Is there a way to reduce the size of the saved model? Are there any configuration in pickle that could be helpful, or any alternative for pickle? | Trained Machine Learning model is too big | 1.2 | 0 | 0 | 9,137 |
43,592,879 | 2017-04-24T16:04:00.000 | 4 | 0 | 1 | 0 | python,anaconda,spyder | 43,593,102 | 8 | false | 0 | 0 | In Preferences, select Python Interpreter
Under Python Interpreter, change from "Default" to "Use the following Python interpreter"
The path there should be the default Python executable. Find your Python 2.7 executable and use that. | 2 | 42 | 0 | I am using 3.6 Python version in anaconda spyder on my mac. But I want to change it to Python 2.7.
Can any one tell me how to do that? | How to change python version in anaconda spyder | 0.099668 | 0 | 0 | 189,338 |
43,592,879 | 2017-04-24T16:04:00.000 | 4 | 0 | 1 | 0 | python,anaconda,spyder | 54,063,608 | 8 | false | 0 | 0 | Set python3 as a main version in the terminal:
ln -sf python3 /usr/bin/python
Install pip3:
apt-get install python3-pip
Update spyder:
pip install -U spyder
Enjoy | 2 | 42 | 0 | I am using 3.6 Python version in anaconda spyder on my mac. But I want to change it to Python 2.7.
Can any one tell me how to do that? | How to change python version in anaconda spyder | 0.099668 | 0 | 0 | 189,338 |
43,596,345 | 2017-04-24T19:34:00.000 | 0 | 0 | 0 | 0 | python,kernel-density | 43,597,467 | 1 | false | 0 | 0 | I did find how to work around by transforming the dataframe's columns into one single column.
df.stack() | 1 | 0 | 1 | I need to make a single gaussian kernel density plot of a dataframe with multiple columns which includes all columns of the dataframe. Does anyone know how to do this?
So far I only found how to draw a gaussian kernel plot of a single column with seaborn. ax = sns.kdeplot(df['shop1'])
However, neither ax = sns.kdeplot(df)norax = sns.kdeplot(df['shop1','shop2]) do not work.
Otherwise is there a workaround where I could transform the dataframe with shape df.shape(544, 33) to (17952, 2), by appending each columns to eachother?
The dataframe includes normalized prices for one product, whereas each column represents a different seller and the rows indicate date and time of the prices. | python: one kernel density plot which includes multiple columns in a single dataframe | 0 | 0 | 0 | 465 |
43,597,163 | 2017-04-24T20:27:00.000 | 2 | 0 | 1 | 0 | python,simulation,cgns | 47,367,063 | 3 | true | 0 | 0 | If the CGNS file is written with hdf5 (instead of the older ADF versions) you can open them with the python libraries h5py or tables. I use these to read my CGNS files and access them like any other hdf5 file. The same could be said for matlab or any other language... if you can read hdf5 you can read CGNS. I believe CGNS versions 3+ default to hdf5. | 1 | 0 | 0 | How to read cgns file contain mesh in python?
I found one package Pymesh but this package only deal with (read/write 2D and 3D mesh in .obj, .ply, .stl, .mesh).
Does anyone knows any package? | Package that deals with cgns format? | 1.2 | 0 | 0 | 1,206 |
43,597,253 | 2017-04-24T20:35:00.000 | 0 | 1 | 1 | 1 | python,python-3.x,environment-variables,ipython-parallel | 43,619,292 | 1 | false | 0 | 0 | Eventually, I managed to solve this using a startup script for the ipengines (see ipengine_config.py). The startup script defines the path, pythonpath etc prior to starting each ipengine.
However, it is still unclear to me why the same result cannot be achieved by setting these variables prior to starting an ipengine (in the same environment). | 1 | 0 | 0 | Using Windows / ipython v6.0.0
I am running ipcontroller and a couple of ipengines on a remote host and all appears to work fine for simple cases.
I try to adjust the pythonpath on the remote host (where the ipengines run) such that it can locate python user packages installed on the remote host. For some reason the ipengine does not accept this.
I can't figure out where each ipengine gets its pythonpath from. Starting a command prompt, changing the pythonpath and then starting an ipengine in that environment does not help.
In fact, this does not seem to apply to the pythonpath, but also to all other environment variables. All come from somewhere and apparently can't changed such that the ipengine uses these values.
The only option seems to be is to add all packages, required binaries etc, in the directory where the ipengine is started from (since that directory is added to the pythonpath).
This seems rather crude and not very elegant at all. Am I missing something here? | How can I set the pythonpath and path of an ipengine (using ipyparallel)? | 0 | 0 | 0 | 175 |
43,597,831 | 2017-04-24T21:14:00.000 | 1 | 0 | 1 | 0 | python,c++,json,protocol-buffers,flatbuffers | 43,599,013 | 1 | true | 0 | 0 | It looks like indeed it fails to verify required fields are present. This would be easy to add however, you should file an issue (or a PR) on github. | 1 | 1 | 0 | I am trying to use Flatbuffers, and I can validate my JSON response agains the Flatbuffer schema using flatc. it seems like flatc doesn't check if Required fields exist or not? am I missing something or Does flatc not validate Required fields of Flatbuffer schema of the given JSON? | Does flatc validate Required fields of Flatbuffer schema of the given JSON? | 1.2 | 0 | 0 | 494 |
43,603,047 | 2017-04-25T06:20:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 43,603,137 | 1 | false | 0 | 0 | data = ''.join(c for c in data if c.printable) | 1 | 0 | 0 | I have text content which is coming from different languages like chines, Hebrew and so on. By using google translator API converting the text into 'en'. Here problem is google translator is failing when its identifies some special characters like \x11,\x01(unable to display those characters over here) and dropping that set of records. Please suggest some safest way to do this conversion with out dropping records. | Trim unprintable characters using python | 0 | 0 | 0 | 17 |
43,604,414 | 2017-04-25T07:34:00.000 | 0 | 0 | 1 | 0 | python-3.x,pip,selenium-chromedriver | 43,604,515 | 1 | false | 0 | 0 | Isn't the name of the package you want to install rather chromedriver_installer. So try to install chromedriver_installer with pip install chromedriver_installer and see if this works. | 1 | 0 | 0 | When I use the command "pip install chromedriver" and checked the directory. It installed a directory named "chromedriver" and only have init.py and pycache inside, rather than a executable file.
Is this because some errors of my environment and how can I fix it? | pip install chromedriver only installed python bindings | 0 | 0 | 0 | 657 |
43,606,341 | 2017-04-25T09:12:00.000 | 4 | 0 | 0 | 0 | javascript,python | 43,606,517 | 2 | false | 1 | 0 | simply use 'autofocus' in input html <input type="text" name="fname" autofocus> | 1 | 2 | 0 | I'm wondering if there's an easy way (using python/js/html) to automatically select the form to insert credentials.
Basically at the login-page you don't have to click the 'username' form and can type right away.
Thanks! | Type your info without having to click the form | 0.379949 | 0 | 1 | 63 |
43,607,466 | 2017-04-25T09:58:00.000 | 4 | 1 | 0 | 0 | python,python-2.7,amazon-web-services,amazon-ec2 | 43,607,778 | 1 | false | 1 | 0 | I think you need to profile your code locally and ensure it really is CPU bound. Could it be that time is spent on the network or accessing disk (e.g. reading the image to start with).
If it is CPU bound then explore how to exploit all the cores available (and 25% sounds suspicious - is it maxing out one core?). Python can be hard to parallelise due to the (in)famous GIL. However, only worry about this when you can prove it's a problem, profile first! | 1 | 1 | 0 | I'm using g2.2 xlarge instance of amazon.
I'm having this function that takes 3 minutes to run on my laptop which is so slow.
However, when running it on EC2 it takes the same time , sometimes even more.
Seeing the statistics , I noticed EC2 uses at its best 25% of CPU.
I paralleled my code, It's better but I get the same execution time between my laptop and EC2.
for my function:
I have an image as input,I run my function 2 times(image with and without image processing) that I managed to run them in parallel. I then extract 8 text fields from that image using 2 machine learning algorithms(faster-rcnn(field detection)+clstm(text reading) and then the text is displayed on my computer.
Any idea how to improve performance (processing time) in EC2? | Why amazon EC2 is as slow as my machine when running python code? | 0.664037 | 0 | 0 | 1,356 |
43,611,210 | 2017-04-25T12:52:00.000 | 1 | 0 | 1 | 0 | python,grpc | 43,613,508 | 1 | true | 0 | 0 | This is possible in the sense of "nothing is stopping you from doing it" but gRPC Python doesn't provide specific utilities to assist with per-thread state. | 1 | 2 | 0 | In the Python implementation of gRPC servers, is it possible to maintain some persistent per-thread state between requests? (looks like all examples use concurrent.futures.ThreadPoolExecutor, but I haven't found much documentation on what the actual server model is)
This would be for things with a non-negligible setup cost that I'd want to avoid doing on every RPC call, but which I can't rely on being thread-safe. E.g. DB connections, setting up an in-memory cache, etc. | Persistent state between requests in Python gRPC server? | 1.2 | 0 | 0 | 462 |
43,611,550 | 2017-04-25T13:05:00.000 | 2 | 0 | 0 | 0 | python,nlp,nltk | 43,612,292 | 2 | false | 0 | 0 | This is a bit more experimental, but another possibility is to use word embeddings.
The words great and good should have similar occurrence contexts, so their vectors should be similar, you cluster your words like that, and aggregate them into the same word/concept.
Of course this will greatly depend on the corpus and methods you use to generate the embeddings. | 1 | 0 | 1 | I'm looking for a lemmatization module/lib that will transfer a sentence like:
"this is great" to "this is good".
I'm familiar with some of the tools available in nltk such as stemming and lemmatization, however it's not exactly what I'm looking for
My goal is to minimize the variety of ways saying the same thing. | Python: linguistic normalization | 0.197375 | 0 | 0 | 593 |
43,613,798 | 2017-04-25T14:40:00.000 | 2 | 0 | 0 | 1 | python-3.x,server,bottle | 43,635,182 | 1 | false | 0 | 0 | On PythonAnywhere, all you need to do is:
Sign up for an account, and log in.
Go to the "Web" tab
Click the "Add a new web app" button
Select "Bottle"
Select the Python version you want to use
Specify where you want your code files to be
...and then you'll have a bottle server up and running on the Internet, with simple "Hello world" code behind it. You can then change that to do whatever you want. | 1 | 0 | 0 | The project I'm doing requires a server. But, with Bottle I can create only a localhost server. I want to be able to access it anywhere. What do I use? I know about pythonanywhere.com, but I'm not sure as to how to go about it. | How to create an online Bottle server accessible from any system? | 0.379949 | 0 | 1 | 63 |
43,614,750 | 2017-04-25T15:18:00.000 | 0 | 0 | 0 | 0 | python,flask,websocket,python-3.5,gevent | 46,964,623 | 1 | false | 1 | 0 | You can call the ws = environ["wsgi.websocket"] to pull up the websocket interface anywhere in your application. | 1 | 0 | 0 | I have a WebSocketServer with Flask and gevent. I have my own WebsocketApplication inheriting from WebSocketApplication in which I overrite on_open(), on_close(), on_message() and it's working fine. I also have a Method broadcast_message() to send a message to all clients, which is working fine as well. Now I need to call broadcast_message() from outside my WebSocketApplication (e.g. when a special site is visited), but I didn't find any way to do this. I am using Python 3.5. | Websocketserver with Flask and gevent in Python | 0 | 0 | 0 | 108 |
43,617,007 | 2017-04-25T17:10:00.000 | 1 | 0 | 0 | 0 | python,multithreading,networking,server,multicast | 43,617,374 | 1 | true | 0 | 0 | My approach would be to use a udp server that can broadcast to multiple clients. So basically, all the clients would connect to this server during a game session, and the server would broadcast the game state to the clients as it is updated. Since your game is relatively simple this approach would give you real time updates. | 1 | 0 | 0 | I am working on a small programming game/environment in Python to help my younger brother learn to code. It needs to operate over a network, and I am new to network programming. I am going to explain the concept of the game so that someone can point me in the best direction.
The idea is a simple grid of 25x25 'diodes,' squares with fixed positions and editable color values, essentially simulating a very small screen. In addition to the grid display, there is a command window, where Python code can be entered and sent to an instance of InteractiveConsole, and a chat window. A client needs to be able to send Python commands to the host, which will run the code, and then receive the output in the form of a string representing changes to the grid. My concept for doing this involves maintaining a queue on the host side of incoming and outgoing events to handle and relay to the clients on individual threads. Any given command/chat event will be sent to the host and relayed to all clients, including the client who created the event, so that those events are visible to all clients in their command/chat windows. All changes to the grid will originate with the host as a result of processing commands originated from clients and will also be sent out to all clients.
What I primarily don't understand is how to synchronize between all clients, i.e. how to know when a given item in the queue has been successfully sent out to all clients before clearing it from the queue, since any individual thread doing so prematurely will prevent the item from being sent to other clients. This is an extremely open-ended question because I understand that I will definitely need to consume some learning materials before I'm ready to implement this. I'm not asking for a specific solution but rather for some guidance on what general type of solution could work in my situation. I'm doing this in my spare time, so I don't want to spend a month going through networking tutorials that aren't pointing me in a direction that will be applicable to this project. | How To Send Data To Multiple Clients From A Queue | 1.2 | 0 | 1 | 343 |
43,620,849 | 2017-04-25T20:59:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,web-crawler,scrapy-spider,bigdata | 43,621,692 | 1 | false | 0 | 0 | Most search engines don't want you scraping their results, but they do offer alternatives:
Google Custom Search
Google Alerts
Bing API
There are also some services that sell access to what you want. Off the top of my head, I know of:
Brightplanet
Webhose
(I'm not affiliated with any of these, but I have used all of them in the past.) | 1 | 0 | 0 | For example, I am interested in gathering daily information on a specific NBA player.
As far as i know Google do not allow to scraping it results. Does Google offers other possibilities for machine queries? Are they Python Packages to preform those queries? | How to gather information on a given theme using python? | 0.197375 | 0 | 1 | 70 |
43,622,277 | 2017-04-25T22:54:00.000 | 0 | 0 | 0 | 0 | python,django,cassandra,permissions,cqlengine | 43,630,653 | 2 | false | 1 | 0 | Is the system_auth keyspace RF the same as the amount of nodes? Did you try to run a repair on the system_auth keyspace already? If not do so.
For me it sounds like a consistency issue. | 2 | 0 | 0 | I use cqlengine with django. In some occasions Cassandra throws an error indicating that user has no permissions do to something. Sometimes this is select, sometimes this is update or sometimes it is something else. I have no code to share, because there is no specific line that does this. I am very sure that user has all the permissions, and sometimes it works. So if user did not have the permissions it should always throw no permission error.
So what might be the reasons behind this and how to find the problem? | Cassandra: occasional permission errors | 0 | 1 | 0 | 292 |
43,622,277 | 2017-04-25T22:54:00.000 | 0 | 0 | 0 | 0 | python,django,cassandra,permissions,cqlengine | 43,645,204 | 2 | false | 1 | 0 | If you have authentication enabled, make sure you set appropriate RF for keyspace system_auth (should be equal to number of nodes).
Secondly, make sure the user you have created has following permissions on all keyspaces. {'ALTER', 'CREATE', 'DROP', 'MODIFY', 'SELECT'}. If you have the user as a superuser make sure you add 'AUTHORIZE' as a permission along with the ones listed above for that user.
Thirdly, you can set off a read-repair job for all the data in system_auth keyspace by running CONSISTENCY ALL;
SELECT * from system_auth.users ;
SELECT * from system_auth.permissions ;
SELECT * from system_auth.credentials ;
Hope this will resolve the issue ! | 2 | 0 | 0 | I use cqlengine with django. In some occasions Cassandra throws an error indicating that user has no permissions do to something. Sometimes this is select, sometimes this is update or sometimes it is something else. I have no code to share, because there is no specific line that does this. I am very sure that user has all the permissions, and sometimes it works. So if user did not have the permissions it should always throw no permission error.
So what might be the reasons behind this and how to find the problem? | Cassandra: occasional permission errors | 0 | 1 | 0 | 292 |
43,622,947 | 2017-04-26T00:15:00.000 | 3 | 0 | 1 | 0 | python,django,dependencies,pip | 63,472,126 | 3 | false | 0 | 0 | In recent pip versions using pip install -r requirements.txt will fail if you have any conflict in your dependencies specified in requirements.txt. | 1 | 7 | 0 | Is it possible to re-check the dependencies of packages installed with pip? That is, suppose we have a working environment. Then, one of the packages changes (gets upgraded, etc). Is there a command one can run to make to make sure that the dependency tree is still sound and does not have conflicts? | (Re)Checking Dependencies with PIP | 0.197375 | 0 | 0 | 4,445 |
43,624,308 | 2017-04-26T03:07:00.000 | 0 | 0 | 0 | 0 | python,nlp,classification,feature-extraction,sentiment-analysis | 43,625,960 | 1 | false | 0 | 0 | I think you will find that bag-of-words is not so naive. It's actually a perfectly valid way of representing your data to give it to an SVM. If that's not giving you enough accuracy you can always include bigrams, i.e. word pairs, in your feature vector instead of just unigrams. | 1 | 1 | 1 | I have a corpus of around 6000 texts with comments from social network (FB, twitter), news content from general and regional news and magazines, etc. I have gone through first 300 of these texts and tag each of these 300 texts' content as either customer complaint or non-complaint.
Instead of naive way of bag of words, I am wondering how can I accurately extract the features of these complaints and non-complaints texts? My goal is to use SVM or other classification algorithm/ library such as Liblinear to most accurately classify the rest of these texts as either complaint or non-complaint with the current training set of 300 texts. Is this procedure similar to sentiment analysis? If not, where should I start? | How to extract COMPLAINT features from texts in order to classify complaints from non-complaints texts | 0 | 0 | 0 | 319 |
43,624,618 | 2017-04-26T03:41:00.000 | 1 | 0 | 1 | 0 | python,multithreading,scipy,multiprocessing,python-imaging-library | 44,355,165 | 1 | true | 0 | 0 | scipy.misc.imread is safe from multiple threads, but each call locks the global interpreter, so performance won't benefit from multithreading.
It works well from multiprocessing, no unexpected issues. | 1 | 2 | 1 | I have set up a producer/consumer model using python Queues. In one producer I'm reading images using scipy.misc.imread.
Reading images in one thread is not fast enough, it takes ~0.2s per image to read. About 20MB/sec reading from an SSD.
I tried adding another identical thread using python's threading module. However the time spent in scipy.misc.imread increased by approximately double, causing the 2 threads to read images approximately as fast as the 1 did.
I'm sure my SSD can handle 40MB/sec throughput, even with random reads. A dd write test shows 800MB+/sec write speeds.
I am left wondering if scipy.misc.imread runs as a critical region among threads? Would I expect multiprocessing to avoid the problem? | Is scipy.misc.imread safe/efficient to run from multiple threads? | 1.2 | 0 | 0 | 186 |
43,626,115 | 2017-04-26T05:55:00.000 | 6 | 0 | 1 | 0 | python,anaconda,conda | 43,837,450 | 1 | true | 0 | 0 | Have you tried conda install python=3.5.1? It deals with the current root environment instead of creating a separate one. | 1 | 5 | 0 | Currently I have anaconda3 installed in my server with the following version:
Python 3.4.3 |Anaconda 2.3.0 (64-bit)
I want to update the Python to Python 3.5.1.
I know that conda update python updates python to latest version, but I want to update it to only 3.5.1. What will be the command for it? | Update python to a specific version using conda | 1.2 | 0 | 0 | 5,022 |
43,630,471 | 2017-04-26T09:41:00.000 | 1 | 0 | 0 | 0 | python,sockets,udp,buffer | 48,019,506 | 1 | false | 0 | 0 | I also met the same problem. The solution I chose is to turn off socket when I don't need to receive data. Reopen it when I need it. So the data in the buffer is emptied. | 1 | 3 | 0 | I've some problem with Sockets UDP in Python:
I've a software which receives a message in input from a socket and then do some elaborations before wait for another message from the socket.
Let's suppose that in the meanwhile more messages arrive:
If I'm right, they go in a buffer (FIFO) and everytime I listen the socket, I read the oldest one, right?
Is there a way to delete the buffer and everytime read the next message? I want to ignore all the oldest messages...
Another problem is that I've like a tons of message every seconds. How can I empty the buffer if they continue to fill it? | UDP Socket in Python: How to clear the buffer and ignore oldes messages | 0.197375 | 0 | 1 | 2,313 |
43,630,573 | 2017-04-26T09:45:00.000 | 0 | 0 | 0 | 0 | python,django,azure,environment-variables | 43,651,050 | 2 | false | 1 | 0 | Just use import os to import os package in your Django code like settings.py, then get these environment variables which be defined in App settings or Connection strings of the tab Application settings on Azure portal via the code os.getenv("<environ-var-name>").
For the environment variable defined in App settings, you just pass the variable name defined like VARNAME as the argument for os.getenv().
To get the value of the environment variable defined in Connection strings, you need to add the different prefix or surfix for the variable name like VARNAME, as below.
For SQL Database, using prefix SQLAZURECONNSTR with infix symbol _ for VARNAME is SQLAZURECONNSTR_VARNAME.
For SQL Server, using prefix SQLCONNSTR with infix symbol _ for VARNAME is SQLCONNSTR_VARNAME.
For MySQL, using prefix MYSQLCONNSTR with infix symbol _ for VARNAME is MYSQLCONNSTR_VARNAME.
For Custom, using surfix MYSQLCONNSTR with infix symbol _ for VARNAME is VARNAME_CONNECTSTRING.
Hope it helps. | 1 | 0 | 0 | It's my first website on Azure with Python. I developed before with .NET. Sorry if my question is a basic one.
I deployed my Django website on Azure. It works fine with the DB settings in the settings.py.
Next step, I thought, is to transfer the DB settings to Azure's Application Settings - Connection Strings, what I did.
I am aware that I need the prefix MYSQLCONNSTR_connectionString1.
How can I now link my Django/Python WebApp with the Connection String in the Azure Application Settings?
I would be very grateful for help. | Link Azure Connection String with Django WebApp? | 0 | 0 | 0 | 987 |
43,631,554 | 2017-04-26T10:27:00.000 | 0 | 0 | 1 | 0 | python,packaging,pypi | 43,631,959 | 1 | true | 0 | 0 | 15 minutes later I find the answer. sdist only includes the *.py files. I just changed the command to use bdist_wheel and all the files I needed were included. | 1 | 0 | 0 | I'm new to packaging in Python. I've tried to specify my non-python files within setup.py's 'scripts' argument, and also specifying the file within MANIFEST.in, however after I package the file using python setup.py build sdist and install using pip, only the files with the .py extension make it to the site-packages/my_package directory.
Am I missing something? | Why are my non-python files not being packaged? | 1.2 | 0 | 0 | 49 |
43,631,564 | 2017-04-26T10:28:00.000 | 3 | 1 | 0 | 1 | python,c++,c,linux,signal-processing | 43,632,432 | 1 | true | 0 | 0 | "I'm needing to capture the raw data (every few milliseconds) that the microphone provides"
No, you don't. That wouldn't work. Even if you captured that data every millisecond, at exactly a multiple of 1000 microseconds (no jitter), you would have an audio quality that's utterly horrible. A sample frequency of 1000 Hz (once per millisecond) limits the Nyquist frequency to 500 Hz. That's horribly low.
"I want to make real time maginitude analysis". Well, you're ignoring the magnitude of components above 500 Hz, which is about 98% of the audible frequencies.
"real time fft" - same problem, that too would miss 98%.
You can't handle raw audio like that. You must rely on the sound card to do the heavy lifting, to get the timing rights. It can sample sounds every 21 microseconds, with microsecond accuracy. You can talk to the audio card using ALSA or PulseAudio, or a few other options (that's sound on Linux for you). But recommendations there would be off-topic. | 1 | 2 | 0 | I'm needing to capture the raw data (every few miliseconds) that the microphone provides. For preference on Python, but it can be in C/C++ too. I'm using Linux/macOS.
How do I capture the audio wave (microphone input) and what kind of data it will be? Pure bytes? An array with some data?
I want to make real time maginitude analysis and (if magnitude reachs a determined value) real time fft of the microphone signal, but I don't know the concepts about what data and how much data the microphone provides me.
I see a lot of code that sets to capture 44.1kHz of the audio, but does it capture all this data? The portion of data taken depends of how it was programmed? | How to capture the microphone buffer raw data? | 1.2 | 0 | 0 | 1,426 |
43,631,693 | 2017-04-26T10:33:00.000 | 6 | 0 | 0 | 1 | python,hadoop,airflow | 53,409,092 | 5 | false | 0 | 0 | As menioned by Pablo and Jorge pausing the Dag will not stop the task from being executed if the execution already started. However there is a way to stop a running task from the UI but it's a bit hacky.
When the task is on running state you can click on CLEAR this will call job.kill() the task will be set to shut_down and moved to up_for_retry immediately hence it is stopped.
Clearly Airflow did not meant for you to clear tasks in Running state however since Airflow did not disable it either you can use it as I suggested. Airflow meant CLEAR to be used with failed, up_for_retry etc... Maybe in the future the community will use this bug(?) and implement this as a functionality with "shut down task" button. | 2 | 55 | 0 | How can I stop/kill a running task on Airflow UI? I am using LocalExecutor.
Even if I use CeleryExecutor, how do can I kill/stop the running task? | How to stop/kill Airflow tasks from the UI | 1 | 0 | 0 | 83,312 |
43,631,693 | 2017-04-26T10:33:00.000 | 11 | 0 | 0 | 1 | python,hadoop,airflow | 50,707,968 | 5 | false | 0 | 0 | from airflow gitter (@villasv)
" Not gracefully, no. You can stop a dag (unmark as running) and clear
the tasks states or even delete them in the UI. The actual running
tasks in the executor won't stop, but might be killed if the
executor realizes that it's not in the database anymore. " | 2 | 55 | 0 | How can I stop/kill a running task on Airflow UI? I am using LocalExecutor.
Even if I use CeleryExecutor, how do can I kill/stop the running task? | How to stop/kill Airflow tasks from the UI | 1 | 0 | 0 | 83,312 |
43,633,459 | 2017-04-26T11:53:00.000 | 0 | 1 | 0 | 0 | python,linux,security,ssh | 43,633,928 | 1 | false | 0 | 0 | Why don't you just add an ssh-daemon on Port 8443 and use ssh-Agent forwarding?
That way the private key never gets written down on P and you don't have to write and maintain your own program. | 1 | 1 | 0 | A python program P running on server S1, listening port 8443.
Some other services can send id_isa, ip pair to P. P could use this pair and make a ssh connection to the ip (create a ssh process).
How to make protect the id_rsa file even the machine S1 is cracked ? How to let root user can't get the id_rsa content (It seems ssh can use -i keyfile only)?
The main problem is P must save the id_rsa file to the disk,so that ssh can use it to conect to the ip. | how to design this security demands? | 0 | 0 | 0 | 48 |
43,637,515 | 2017-04-26T14:47:00.000 | 0 | 0 | 0 | 0 | python,keras,mse | 43,637,652 | 3 | false | 0 | 0 | Probably some default values that have changed from Keras 1.2.
You should check the default values for you 1.2 code, and set the same value for your new code. | 1 | 4 | 1 | I ran the same code (with the same data) on CPU first using keras 1.2.0 and then keras 2.0.3 in both codes keras is with TensorFlow backend and also I used sklearn for model selection, plus pandas to read data.
I was surprised when I got the MSE(Mean squared error) of 42 using keras 2.0.3 and 21 using keras 1.2.0. Can someone pls explain to me why this is happening? Why I am getting more error using keras 2? Thanks
PS. This result is after editing the code to keras 2 standard. for example in Dense I change keras 1 code to keras 2 standard. | Getting worse result using keras 2 than keras 1 | 0 | 0 | 0 | 417 |
43,638,497 | 2017-04-26T15:28:00.000 | 2 | 1 | 0 | 0 | java,python,import,export,alfresco | 43,650,006 | 2 | true | 1 | 0 | If it is a single archive your best bet is to unpack the acp (just a normal zip file, so any zip tool will work) and manipulate the .XML file inside it, which contains all the metadata, types, associations...
You could then use an XSLT to change the XML file and types and properties inside and rezip it with the rest of the content package.
Another approach can be to add the missing properties and aspects in a new 'legacy'-content model and add it to Alfresco 5.1. Once it is imported you can write a script to transfer the properties to the new model.
Once you are sure everything is copied you can remove the old model. | 2 | 0 | 0 | I exported my documents from Alfresco 4.x and now i need to import them to Alfreco 5.1, however i had different content models. So only think i need is to rewrite types and base url, i have similar types in my new Alfresco, but not the same name and prefix, url. So my question is:
How to rewrite metadata which is stored in ACP file in python or maybe java?
I tried to use zipFile in python, but it gives me only errors and keep convincing me that i dont have zip file. I can't open it in notepad++ because it is not readable. I tried to just read content of file but python give blank line when i try to print it.
EDIT:
Here is a link to my file that i need to open and edit.
DELETED no need for this anymore. | Edit content of acp in python/java | 1.2 | 0 | 0 | 115 |
43,638,497 | 2017-04-26T15:28:00.000 | 0 | 1 | 0 | 0 | java,python,import,export,alfresco | 43,651,233 | 2 | false | 1 | 0 | I'm sorry i see today that i did bad export it had 0 kb so python was right it is empty i don't know how it happened. Thank you all, now i can work with as a zipFile and i will edit xml with metadatas im happy now :) | 2 | 0 | 0 | I exported my documents from Alfresco 4.x and now i need to import them to Alfreco 5.1, however i had different content models. So only think i need is to rewrite types and base url, i have similar types in my new Alfresco, but not the same name and prefix, url. So my question is:
How to rewrite metadata which is stored in ACP file in python or maybe java?
I tried to use zipFile in python, but it gives me only errors and keep convincing me that i dont have zip file. I can't open it in notepad++ because it is not readable. I tried to just read content of file but python give blank line when i try to print it.
EDIT:
Here is a link to my file that i need to open and edit.
DELETED no need for this anymore. | Edit content of acp in python/java | 0 | 0 | 0 | 115 |
43,639,539 | 2017-04-26T16:14:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo-8 | 43,661,888 | 1 | false | 1 | 0 | I assume the Save button invokes the model's write method. In that case, you can override this method to raise custom error message when the corresponding conditions are met. You have the uid of the user that called the write method, so it must be enough to achieve the desired effect. | 1 | 0 | 0 | In Odoo 8, I added an ir.rule to a model that restricts write access for certain users. I would like to personalize the error message restricted user get after clicking 'Save'. I can't just modify the translation with _() because the new message must be specific to that model.
Is there a way to do this easily in Odoo 8 without having to modify the source code of Odoo itself ? | Odoo 8 : Personalized error message after trying to modify a read-only object | 0 | 0 | 0 | 197 |
43,640,412 | 2017-04-26T17:02:00.000 | 2 | 0 | 1 | 0 | python,python-2.7 | 43,640,543 | 4 | true | 0 | 0 | You can leverage unpacking to accomplish this along with a list comprehension:
D, M, s = [x[0] for x in n]
This effectively loops through the list of tuples taking the first item and resulting in a list that now looks like: [100, 300, 500]
This is then unpacked in: D, M, s
Notice that the code is very simple, easy to read and doesn't require any other constructs to make it work. | 1 | 1 | 0 | So I've completed a project I was working on but I'm trying to find a way to make it more pythonic in a sense that takes less lines and well looks cleaner. I've been told before that if it isn't broken it shouldn't be fix but always looking for a better way to improve my programming.
So I have a tuple n with these values:
n = ((100,200), (300,400),(500,600))
for i, x in enumerate(n):
if i is 0: D = x[0]
if i is 1: M = x[0]
if i is 2: s = x[0]
print D, M, s
where (D, M, s) should print out:
100, 300, 500
Is there a way to write those if statement since they are all going to be the first value always every time it loops through the tuple? | Pythonic way of looping values into variables | 1.2 | 0 | 0 | 55 |
43,641,247 | 2017-04-26T17:50:00.000 | -1 | 1 | 0 | 0 | python,arrays,cluster-computing,python-multithreading,numexpr | 46,969,923 | 1 | false | 0 | 0 | I´m not sure, how numexpr actually works internally, when detect_number_of_threads is called, but maybe it reads out the number of threads that is available to openmp and not the number of threads that were locally set. | 1 | 2 | 1 | I am using numexpr for simple array addition on a remote cluster. My computer has 8 cores and the remote cluster has 28 cores. Numexpr documentation says that "During initialization time Numexpr sets this number to the number of detected cores in the system" But the cluster gives this output.
detect_number_of_cores() = 28
detect_number_of_threads()=8
ALthough when I try to set the number of threads manually to something else(set_num_threads=20) , array operation seems to run faster. But detect_number_of_threads() still gives 8 as output.
Is this a bug? | Numexpr detecting number of threads less than number of cores | -0.197375 | 0 | 0 | 5,204 |
43,642,567 | 2017-04-26T19:05:00.000 | 2 | 0 | 1 | 0 | python,nlp,nltk,wordnet | 48,983,892 | 1 | true | 0 | 0 | I managed to delete duplicate items using wordnet.synsets to get the synonyms and then just iterated through the list to remove duplicates. I'm sure there are more sophisticated methods than iterating through the list but it worked just fine for me. | 1 | 1 | 0 | so this might be a bit of an amateur question but is there a way to remove synonym words from a text (or a list for that matter) using nltk?
by synonym I also mean same words written differently like :
70's and 70s and 70_s
or dog and hound
I would really appreciate some general guide lines or pointing me to a tutorial (which I could not find any).
thanks in advance | remove synonym words from text using nltk | 1.2 | 0 | 0 | 754 |
43,645,519 | 2017-04-26T22:25:00.000 | -2 | 0 | 1 | 0 | visual-studio,python-2.7 | 70,927,973 | 4 | false | 0 | 0 | Leaving this here for the next time I run into this problem. For me, builds were failing via CI (Jenkins), but working for my user. Eventually I figured out that VCPython doesn't install system wide. I had to use runas to open a command prompt as the Local System user and then run the installer from that command prompt. Hopefully this helps someone else! | 1 | 21 | 0 | When I want to install packages, including Jupyter, I get the error that Microsoft Visual C++ 9.0 is required. I get the same error with Pip and pre-compiled binaries on UC website.
I have Visual Studio 17 express installed and I have manually added the path of vcvarsall to my environment.
I also saw solution to update the VS###COMMONTOOLS, however VS###COMMONTOOLS variable doesn't exist.
I am using Windows Server 2012.
How can I proceed? | Microsoft Visual C++ 9.0 is required | -0.099668 | 0 | 0 | 60,407 |
43,646,796 | 2017-04-27T00:44:00.000 | 0 | 1 | 1 | 0 | python,twitter,nlp | 43,656,235 | 1 | false | 0 | 0 | The term assertive is rather relative. However, you have to define certain boundaries for what you think assertiveness is.
I would then gather a few phrases or words that I think are assertive, and use them to filter the existing tweets. | 1 | 0 | 0 | //For my project, I first collected tweets(specific count) using Python and Tweepy
//Did pre-processing to keep meaningful data.
//The next step is to search for tweets which have assertive sentences.
Please Help | How can we filter out tweets from Twitter that contains assertions | 0 | 0 | 0 | 32 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.