Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
45,321,731 | 2017-07-26T08:42:00.000 | 0 | 0 | 0 | 0 | python,pip,package,installation | 65,110,754 | 2 | false | 0 | 0 | Please try reinstalling the whole Visual Studio package in this case | 1 | 0 | 0 | I tried to install the Python package SimPeg (via setup.py install)
Error:
error: Setup script exited with error: Command "cl.exe /c /nologo /Ox /W3 /GL /D NDEBUG /MD
-IC:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\include
-IC:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\include
-IC:\Users\HP\AppData\Local\Programs\Python\Python36\include
-IC:\Users\HP\AppData\Local\Programs\Python\Python36\include
/Tcdiscretize\TreeUtils.c
/Fobuild\temp.win-amd64-3.6\Release\discretize\TreeUtils.obj" failed with exit status 127
The C++ build tools are already installed. Need your help.
Thanks. | Install package error (cl.exe) | 0 | 0 | 0 | 630
45,327,753 | 2017-07-26T12:59:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,data-structures | 45,327,917 | 2 | false | 0 | 0 | OrderedDict objects
Ordered dictionaries are just like regular dictionaries but they remember the order in which items were inserted. When iterating over an ordered dictionary, the items are returned in the order their keys were first added. | 1 | 0 | 0 | I have two values, p and q, where p is an integer and q is a string.
I need a data structure to store such values as (p, q) pairs in Python, and I will have to sort it in the future and print out the first n elements.
I have tried dictionaries, but after sorting I couldn't display the first n elements, since a dictionary is unordered.
I won't be changing the values in the future. | Store values in python without dictionary | 0.099668 | 0 | 0 | 128
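A minimal sketch of what the answer and question point toward: since the values never change, a plain list of (p, q) tuples can be sorted and sliced directly (the sample data here is made up for illustration):

```python
# Store (integer, string) pairs as tuples in a list.
pairs = [(3, 'c'), (1, 'a'), (2, 'b'), (5, 'e'), (4, 'd')]

# Sort by the integer component and print the first n elements.
n = 3
for p, q in sorted(pairs)[:n]:
    print(p, q)
```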
45,327,934 | 2017-07-26T13:06:00.000 | 0 | 0 | 0 | 0 | python,design-patterns,architecture,software-design | 45,329,724 | 1 | true | 0 | 0 | Create a store factory that gives you each type of store.
Those stores should inherit from a GeneralStore abstract class that defines all those methods and raises a ProductNotFoundException (or whatever) by default :)
In each store, override what actually exists, and when asking for a product, use a try/catch.
You could also have all products inherit from a general product whose getters all return null, overriding them where applicable.
I have four stores:
Store1: meat, ice cream and toilet paper.
Store2: ice cream.
Store3: meat.
Store4: ice cream, pasta, vegetables.
Sometimes I want meat from all stores.
Sometimes only ice cream from all stores.
Sometimes only ice cream from a single store.
How should I design an interface for those classes considering not every method would be implemented in every class? I mean, there's no get_meat for Store2, since there's no meat in Store2.
Interface segregation could lead to an "interface explosion".
On the other hand... people say that multiple inheritance is a bad decision in almost all cases.
I'm stuck trying to deal with that problem.
It would be very nice to have a way to add new Stores and make my IceCream concrete class able to get all the ice cream from a list of Store objects simply by executing the same get_ice_cream method.
Thanks for any help! | Create an interface for heterogeneous sources | 1.2 | 0 | 0 | 40
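A minimal Python sketch of the design the answer describes; the class and exception names (GeneralStore, ProductNotFoundError) are illustrative assumptions:

```python
class ProductNotFoundError(Exception):
    pass

class GeneralStore:
    """Abstract base: every product getter fails unless a store overrides it."""
    def get_meat(self):
        raise ProductNotFoundError("no meat here")

    def get_ice_cream(self):
        raise ProductNotFoundError("no ice cream here")

class Store2(GeneralStore):
    def get_ice_cream(self):
        return "ice cream"

# Collect ice cream from every store, skipping stores that lack it.
stores = [Store2(), GeneralStore()]
ice_cream = []
for store in stores:
    try:
        ice_cream.append(store.get_ice_cream())
    except ProductNotFoundError:
        pass
```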
45,332,160 | 2017-07-26T16:04:00.000 | 1 | 0 | 1 | 0 | python,configparser,python-packaging | 53,334,536 | 4 | false | 0 | 0 | I had this problem as well today with python-2.7.6.
I fixed it by creating an empty __init__.py in the <install_location>/configparser/backports directory.
The pip install did not create it.
We had another version installed by setup.py in a different location which did have the __init__.py file. | 1 | 2 | 0 | I have installed configparser using "pip install configparser" to get configparser-3.5.0, and it is on my PYTHONPATH. But when I use it via "import configparser", I am seeing the error "No module named backports.configparser". configparser.py uses this 'backports' module, and I see the 'backports' module under the Python path, but somehow it is unable to identify that module. Can someone give me an idea about how I can fix this? This certainly looks to me like some version problem of configparser, but I did not find any answers so far. Help will be appreciated, thanks | No module named "backports.configparser" | 0.049958 | 0 | 0 | 10,199
45,333,099 | 2017-07-26T16:53:00.000 | 1 | 0 | 0 | 0 | python,django,apache,mod-wsgi | 45,333,265 | 1 | true | 1 | 0 | Let wsgi.py but don't make DocumentRoot anything like /home/username/djangosites/project/ which would expose your Python scripts source code, which would definitely be very helpful to malicious users.
All you need to expose is STATIC_ROOT (on STATIC_URL) and MEDIA_ROOT (on MEDIA_URL), you can use the Alias directive for that. Another solution is to use dj-static. | 1 | 0 | 0 | where should I place my wsgi.py?
Do I have to isolate it from my django project folder?
Should I move my django project folder outside of my home directory?
Currently I copied my django project folder to /home/username/djangosites/project/
and my wsgi.py is in the folder /home/username/djangosites/project/project/
In the same folder there are files like settings.py urls.py ...
From the modwsgi documentation:
"Note that it is highly recommended that the WSGI application script
file in this case NOT be placed within the existing DocumentRoot for
your main Apache installation, or the particular site you are setting
it up for. This is because if that directory is otherwise being used
as a source of static files, the source code for your application
might be able to be downloaded.
You also should not use the home directory of a user account, as to do
that would mean allowing Apache to serve up any files in that account.
In this case any misconfiguration of Apache could end up exposing your
whole account for downloading.
It is thus recommended that a special directory be setup distinct from
other directories and that the only thing in that directory be the
WSGI application script file, and if necessary any support files it
requires." | Do I have to isolate my wsgi.py from my django project folder? | 1.2 | 0 | 0 | 302 |
45,334,926 | 2017-07-26T18:33:00.000 | 0 | 0 | 0 | 0 | python,excel,vba,winapi,pywin32 | 45,335,421 | 2 | false | 0 | 0 | I'm not familiar with python syntax, but in VBA you dont put quotes around the row number... Ex: myWorksheet.Rows(10).EntireRow.Hidden = True | 2 | 3 | 0 | I am writing a small program in python with pywin32 that manipulates some data in excel and I want to hide a row in order to obscure a label on one of my pivot tables.
According to MSDN the proper syntax is
Worksheet.Rows ('Row#').EntireRow.Hidden = True
When I try this in my code nothing happens - no error, nor a hidden row. I have tried every combination of ranges I can think of to feed it, but it will not hide the row in the output files.
Anyone know of a solution to this or if it is not handled by pywin?
EDIT:
Upon further debugging, I am finding that when I immediately check, the row's Hidden value is True but when I reach the save point the row is no longer hidden (another print reveals Hidden = False) | Hide row in excel not working - pywin32 | 0 | 1 | 0 | 1,117 |
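A minimal pywin32 sketch of the pattern being discussed; the file path and sheet index are illustrative assumptions:

```python
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Open(r"C:\temp\report.xlsx")  # hypothetical path
ws = wb.Worksheets(1)

# Pass the row number unquoted, as the VBA note above suggests.
ws.Rows(10).EntireRow.Hidden = True

wb.Save()
excel.Quit()
```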
45,334,926 | 2017-07-26T18:33:00.000 | 1 | 0 | 0 | 0 | python,excel,vba,winapi,pywin32 | 45,335,753 | 2 | false | 0 | 0 | Turns out that a cell merge later in my program was undoing the hidden row - despite the fact that the merged cells were not in the hidden row. | 2 | 3 | 0 | I am writing a small program in python with pywin32 that manipulates some data in excel and I want to hide a row in order to obscure a label on one of my pivot tables.
According to MSDN the proper syntax is
Worksheet.Rows ('Row#').EntireRow.Hidden = True
When I try this in my code nothing happens - no error, nor a hidden row. I have tried every combination of ranges I can think of to feed it, but it will not hide the row in the output files.
Anyone know of a solution to this or if it is not handled by pywin?
EDIT:
Upon further debugging, I am finding that when I immediately check, the row's Hidden value is True but when I reach the save point the row is no longer hidden (another print reveals Hidden = False) | Hide row in excel not working - pywin32 | 0.099668 | 1 | 0 | 1,117 |
45,335,812 | 2017-07-26T19:26:00.000 | 1 | 0 | 1 | 1 | python | 45,335,902 | 3 | false | 0 | 0 | In environmental variables under path, add your python path... you said you already so please ensure is their comma separation between previous path..
And once added save environment variables tab. And close all command prompt then open it.
Then only command prompt will refresh with your python config..
Main thing, if you enter python which mean python 2.
For python3 type, python3 then it should work | 3 | 0 | 0 | I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.
I used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying "'python' is not recognized as an internal or external command, operable program or batch file" as if I have not yet installed it.
Unlike answers to previous questions, I have already added ";C:\Python36" to my Path environment variable, so what am I doing wrong?
I am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something. | Downloading python 3 on windows | 0.066568 | 0 | 0 | 262 |
45,335,812 | 2017-07-26T19:26:00.000 | 0 | 0 | 1 | 1 | python | 45,361,096 | 3 | false | 0 | 0 | Thanks everyone, I ended up uninstalling and then re-downloading python, and selecting the button that says "add to environment variables." Previously, I typed the addition to Path myself, so I thought it might make a difference if I included it in the installation process instead. Then, I completely restarted my computer rather than just Command Prompt itself. I'm not sure which of these two things did it, but it works now! | 3 | 0 | 0 | I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.
I used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying "'python' is not recognized as an internal or external command, operable program or batch file" as if I have not yet installed it.
Unlike answers to previous questions, I have already added ";C:\Python36" to my Path environment variable, so what am I doing wrong?
I am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something. | Downloading python 3 on windows | 0 | 0 | 0 | 262 |
45,335,812 | 2017-07-26T19:26:00.000 | 0 | 0 | 1 | 1 | python | 45,338,244 | 3 | false | 0 | 0 | Why are you using command prompt? I just use the python shell that comes with IDLE. It’s much simpler.
If you have to use command prompt for some reason, you’re problem is probably that you need to type in python3. Plain python is what you use for using Python 2 in the command prompt. | 3 | 0 | 0 | I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.
I used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying "'python' is not recognized as an internal or external command, operable program or batch file" as if I have not yet installed it.
Unlike answers to previous questions, I have already added ";C:\Python36" to my Path environment variable, so what am I doing wrong?
I am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something. | Downloading python 3 on windows | 0 | 0 | 0 | 262 |
45,337,536 | 2017-07-26T21:13:00.000 | 4 | 0 | 1 | 0 | python,jupyter-notebook,jupyter | 45,337,650 | 3 | true | 0 | 0 | Click on the cell you want to run above, go to Cell -> Run All Above | 1 | 3 | 0 | I was working on a long Jupyter notebook and for some reason I had to close it and restart. After that, I'd like to run all the code before the line I was working on. Is there a convenient way to do this? | In Jupyter notebook interfaces, is there a way to run all code before a selected line of code? | 1.2 | 0 | 0 | 1,538 |
45,341,070 | 2017-07-27T03:50:00.000 | 0 | 0 | 1 | 0 | python | 45,341,485 | 2 | false | 0 | 0 | To exit out of the "interactive mode" that you mentioned (the included REPL shell in IDLE) and write a script, you will have to create a new file by either selecting the option from the top navigation bar or pressing Control-N. As for running the file, there's also that option on the navigation bar; alternatively, you can press F5 to run the program. | 1 | 0 | 0 | I know some basics of Java and C++, and am looking to learn Python
I am trying to develop some random stuff to get a good feel for how it works, but I can only make one-line scripts that run every time I press Enter to go to the next line.
I've seen tutorial videos where they can just open up files from a menu and type away until they eventually run the program.
I'm using IDLE, and I don't see options to open up new stuff; I can only make one- or two-line programs. When I tried to make a calculator program, I didn't know how to run it because it ran every line of code I typed in unless there were ...'s under the >>>'s.
I think it's because I am in interactive mode, whatever that is.
How do I turn it off, if that's the problem?
45,349,040 | 2017-07-27T11:07:00.000 | 8 | 0 | 1 | 0 | python,ide,spyder | 53,994,340 | 2 | false | 0 | 0 | Right clicking in the text window there are menus Zoom in, Zoom out and Zoom reset | 1 | 27 | 0 | Is there an option in Spyder to set/change a default script window (editor) scale?
I always have to adjust the script display for each script when I open Spyder with ctrl + mouse wheel and it annoys me a lil bit.
I searched it Spyder preferences and in google, but didn't find anything helpful. | spyder change editor default font/scale/zoom | 1 | 0 | 0 | 59,955 |
45,349,838 | 2017-07-27T11:42:00.000 | 0 | 0 | 0 | 0 | python,vk | 46,219,457 | 2 | false | 0 | 0 | The answer is to use the 'offset' parameter. I was not aware of it when I asked the question...
I was able to get max of about 4K videos but it take more than 20 API calls as some of the results are repeated i.e., if on the first call I got 200 videos on the second API call with offset 200 I am getting a list of 200 videos but some of them already appeared in the previous list.
In addition, you don't always get 200 videos. Sometimes you will get less (and still repetitions are possible). | 1 | 0 | 0 | I just started using vk api in python and I am looking for a way to get more than 200 videos (possibly by using multiple api calls) for a specific query.
To be more specific, each api call to video.search returns the number of videos that the search yields (the same number can be seen when searching from the website). Is there a way to get let's say the next videos in that list?
thanks!
:-) | Get more than 200 results on vk vides.search api | 0 | 0 | 1 | 2,021 |
45,350,985 | 2017-07-27T12:34:00.000 | 2 | 0 | 1 | 0 | python-3.x,asynchronous,parallel-processing,python-asyncio | 46,352,707 | 1 | true | 0 | 0 | Any I/O bound task would be a good case for asyncio. In the context of the network programming - any application, that requires simultaneous handling of the thousands of connections. Web server, web crawler, chat backend, MMO game backend, torrent tracker and so on. Keep in mind, though, that you should go async all the way and use async versions of all libraries performing blocking I/O, like the database drivers, etc. | 1 | 6 | 0 | Feeling the need to learn how to use asyncio, but cannot think of applicable problem (or problem set) that can help me learn this new technique.
Could you suggest a problem that can help me understand and learn asyncio usage in practice?
In another words: can you suggest me an example of some abstract problem or application, which, while coding it, will help me to learn how to use asyncio in practice.
Thank you | Python asyncio training exercises | 1.2 | 0 | 0 | 1,304 |
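As a minimal starting exercise in the spirit of the answer, here is a sketch that handles many slow I/O operations concurrently; asyncio.sleep stands in for real network I/O:

```python
import asyncio

async def handle(i):
    # Pretend to wait on I/O; control is yielded to other coroutines here.
    await asyncio.sleep(1)
    return i * 2

async def main():
    # Run 100 "connections" concurrently; total time stays ~1 second.
    results = await asyncio.gather(*(handle(i) for i in range(100)))
    print(len(results))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```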
45,351,555 | 2017-07-27T12:59:00.000 | 2 | 0 | 0 | 0 | python,python-3.x,pyglet | 45,351,754 | 2 | false | 0 | 1 | Drawing the sprite you want to be on the top later than the bottom sprites should accomplish what you want. | 1 | 2 | 0 | I am trying to bring sprites to the foreground so that they are on the top layer of an image. This is the line i am using to assign the value of the image
ball = pyglet.sprite.Sprite(ball_image, 50, 50)
Is there a property that I can add to this line that will draw the image in the foreground?
edit: I am trying to keep the first image in the foreground regardless of whether it's drawn before or after the second image. | In Pyglet is there a way to bring sprites to the foreground | 0.197375 | 0 | 1 | 766
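For layering that does not depend on draw order, a sketch using pyglet 1.x batches and ordered groups might look like this; the image variables are assumed to be loaded already:

```python
import pyglet

batch = pyglet.graphics.Batch()
background = pyglet.graphics.OrderedGroup(0)  # drawn first
foreground = pyglet.graphics.OrderedGroup(1)  # drawn last, so on top

# The ball stays in the foreground no matter when it is created.
ball = pyglet.sprite.Sprite(ball_image, 50, 50, batch=batch, group=foreground)
other = pyglet.sprite.Sprite(other_image, 40, 40, batch=batch, group=background)

# In the window's on_draw handler:
#     batch.draw()
```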
45,360,501 | 2017-07-27T20:32:00.000 | 1 | 0 | 1 | 0 | python | 45,360,590 | 2 | true | 0 | 0 | That purely depends on your process flow but it basically boils down to whether you want to deal with an encountered error (and how), or do you want to propagate it to the user of your code - or both.
logging.error() is typically used to log when an error occurs - that doesn't mean that your code should halt and raise an exception as there are recoverable errors. For example, your code might have been attempting to load a remote resource - you can log an error and try again at later time without raising an exception.
btw. you can use logging to log an exception (full, with a stack trace) by utilizing logging.exception(). | 2 | 0 | 0 | I am using logging throughout my code for easier control over logging level. One thing that's been confusing me is
when do you prefer logging.error() to raise Exception()?
To me, logging.error() doesn't really make sense, since the program should stop when it encounters an error, like what raise Exception() does.
In what scenarios do we put up an error message with logging.error() and let the program keep running? | Python -- when to choose logging.error() over raise Exception()? | 1.2 | 0 | 0 | 95
45,360,501 | 2017-07-27T20:32:00.000 | 3 | 0 | 1 | 0 | python | 45,360,543 | 2 | false | 0 | 0 | Logging is merely to leave a trace of events for later inspection but does absolutely nothing to influence the program execution; raising an exception allows you to signal an error condition to higher up callers and let them handle it. Typically you use both together, it's not an either-or. | 2 | 0 | 0 | I am using logging throughout my code for easier control over logging level. One thing that's been confusing me is
when do you prefer logging.error() to raise Exception()?
To me, logging.error() doesn't really make sense, since the program should stop when it encounters an error, like what raise Exception() does.
In what scenarios do we put up an error message with logging.error() and let the program keep running? | Python -- when to choose logging.error() over raise Exception()? | 0.291313 | 0 | 0 | 95
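A small sketch of the "use both together" point from these answers: log the full traceback with logging.exception() and still propagate the error to callers:

```python
import logging

logging.basicConfig(level=logging.INFO)

def load_resource(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        # Leave a trace (message + stack trace) for later inspection...
        logging.exception("could not load %s", path)
        # ...and still let callers decide how to handle the failure.
        raise
```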
45,360,899 | 2017-07-27T20:58:00.000 | 4 | 0 | 1 | 0 | algorithm,python-2.7,python-3.x,parsing | 45,360,968 | 3 | true | 0 | 0 | In plain Python, not using a third-party extension, a.split() ought to be the fastest way to split your input into a list. The str.split() function only has one job and it is specialized for this use. | 1 | 3 | 0 | I am trying create a large list of numbers .
a = '1 1 1 2 2 0 0 1 1 1 1 9 9 0 0' (it goes over a ten million).
I've tried these methods:
%timeit l = list(map(int, a.split())) it was 4.07 µs per loop
%timeit l = a.split(' ') this was 462 ns per loop
%timeit l = [i for i in a.split()] it took 1.19 µs per loop
I understand that the 2nd and 3rd variants produce lists of strings whereas the first is an integer list; this is fine. But as the number of elements gets to over ten million, it can take up to 6 seconds to create the list. This is too long for my purposes.
Could someone tell me a faster and more efficient way to do this.
Thanks | Need a faster and efficient way to add elements to a list in python | 1.2 | 0 | 0 | 1,238 |
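The answer above sticks to plain Python; as a hedged alternative using the kind of third-party extension it alludes to, NumPy can parse such a string straight into an integer array, which is often faster for very large inputs:

```python
import numpy as np

a = '1 1 1 2 2 0 0 1 1 1 1 9 9 0 0'

# Parse the whitespace-separated text directly into an int array.
arr = np.fromstring(a, dtype=int, sep=' ')

# Convert back to a plain list only if one is really needed.
l = arr.tolist()
```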
45,362,440 | 2017-07-27T23:12:00.000 | 1 | 0 | 0 | 0 | python-3.x,odbc,driver,32bit-64bit,pyodbc | 45,365,583 | 1 | true | 0 | 0 | A 32bit application can NOT invoke a 64bit dll, so python 32bit can not talk to a 64bit driver for sure.
msodbc driver for sql server is in essence a dll file: msodbcsql13.dll
I just found out (which is not even mentioned by Microsoft) that "ODBC for SQL Server 13.1 x64" will install a 64-bit msodbcsql13.dll in System32 and a 32-bit msodbcsql13.dll in SysWOW64 (the 32-bit version of "System32" on a 64-bit Windows system).
I cannot, however, be certain that the network protocol of a 32-bit client talking to a 64-bit SQL server will be the same as that of a 64-bit client talking to a 64-bit SQL server. But I believe that once a request is put on the network by the client, 32-bit or 64-bit doesn't matter anymore. Someone please comment on this. | 1 | 0 | 0 | What I can observe:
I am using Windows 7 64-bit. My code (which establishes an ODBC connection with a SQL server on the network, simple reading operations only) is written in Python 3.6.2 32-bit.
I pip installed pyodbc, so I assume that was 32-bit as well.
I downloaded and installed the 64-bit "Microsoft® ODBC Driver 13.1 for SQL Server®" from the Microsoft website.
My Python code connects to
other computers on the network, which run Server 2003 32-bit and either SQL Server 2005 (32-bit) or SQL Server 2008 (32-bit).
The setup works.
Moreover: a cursory test shows that the above setup can successfully connect to a computer with Microsoft Server 2008 (64-bit) running SQL Server 2012 (64-bit), with the configuration under "SQL Server Network Connection (32bit)" being empty (meaning the 32-bit DLL is missing), while the default 64-bit network connection configuration contains the usual config options like IP address and listening port info.
My own explanation:
[1] The client's and the server's OS and ODBC interfaces can be of any 32/64-bit combination, but the protocol that travels through the network between my computer and the SQL computer will be identical.
[2] 32-bit Python + pyodbc can talk to Microsoft's 64-bit ODBC driver, because... 32-bit Python knows how to use a 64-bit DLL...? | 32bit pyodbc for 32bit python (3.6) works with microsoft's 64 bit odbc driver. Why? | 1.2 | 1 | 0 | 1,620
45,366,023 | 2017-07-28T06:08:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,function,object | 45,366,148 | 1 | true | 0 | 0 | As soon as you define a function or method (which is nothing but a bound function), Python creates a Function instance. This happens when your code is run for the first time.
Yes, it is a "waste" of memory, but consider how much memory that is compared to big arrays, binary files etc. Python is definitely not the most performant or resource-light language/interpreter, but it saves you lots of time on writing code (because your write less) and caring about optimisation (you usually don't). I mean seriously, what do a few KB in file size matter nowadays? Surely the loss in value is less than a minute of your attention.
The reason those unused functions can't be optimised away is that they might be used later on in the same script or by other scripts. | 1 | 1 | 0 | As I understand from a book that function in Python is nothing but object of Function class. I have some doubts as below:
1. When is this object created? At the time we define the function, or at the time we call it?
2. If it is created at the time we define a function, then won't it be a waste of memory if we do not call that function anywhere in the program?
Looking for a detailed answer. | Python-Function object creation | 1.2 | 0 | 0 | 53
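A tiny sketch confirming the answer's point that the function object exists as soon as the def statement runs, before any call:

```python
def greet():
    return "hello"

# The object already exists even though greet() was never called.
print(type(greet))          # <class 'function'>
print(greet.__name__)       # 'greet'
print(callable(greet))      # True
```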
45,368,358 | 2017-07-28T08:18:00.000 | 0 | 0 | 1 | 0 | python-3.x,user-interface,tkinter,pyinstaller,tkinter-canvas | 51,119,738 | 2 | false | 0 | 1 | Another option would be to manually open a CMD window, navigate to, and then execute your exe, rather than letting the packaged application spawn the instance. | 1 | 1 | 0 | I'm working on a GUI that I would like to put at the disposal for my colleagues to use under the form of .exe , after some researchs i found pyinstaller as "freezer" which work great after downloading the github version , but my issue is even if the .exe is created when i run it , it show up for less than a second on the screen and it disapears
I would like to know how to keep it on the screen (most important part) and getting it closed when the user close it himself..
Thanks in advance for the help! | avoiding a pyinstaller .exe disapear of the screen without closing | 0 | 0 | 0 | 177 |
45,369,106 | 2017-07-28T08:57:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,statistics,logistic-regression,prediction | 45,370,757 | 1 | false | 0 | 0 | Your best choice would be to use L1 regularized logistic regression (aka Lasso regression). In case you're not familiar with it, the algorithm automatically selects some of the features by penalizing those that do not lead to increased accuracy (in layman terms).
You can increase/decrease this regularization strength (it's just a parameter) until your model achieves the highest accuracy (or some other metric) on a test set or in a cross-validation procedure.
According to the "rule of ten" I need at least 10 events for each feature to be included. However, I have an imbalanced dataset, with 20% positive class and 80% negative class.
That gives me only 70 events, allowing approximately only 7-8 features to be included in the logistic model.
I'd like to evaluate all the features as predictors; I don't want to hand-pick any features.
So what would you suggest? Should I try all possible 7-feature combinations? Should I evaluate each feature alone with an association model and then pick only the best ones for a final model?
I'm also curious about the handling of categorical and continuous features. Can I mix them? If I have a categorical [0-1] feature and a continuous [0-100] feature, should I normalize? | Performing Logistic Regression with a large number of features? | 0 | 0 | 0 | 772
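A minimal scikit-learn sketch of the L1-regularized approach the answer recommends; X and y are assumed to be the 330x27 feature matrix and the binary labels:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Smaller C means a stronger L1 penalty, hence fewer surviving features.
grid = GridSearchCV(
    LogisticRegression(penalty='l1'),
    param_grid={'C': [0.01, 0.1, 1, 10]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
print(grid.best_estimator_.coef_)  # zeroed coefficients = dropped features
```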
45,370,442 | 2017-07-28T09:58:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,pca,svd,normalize | 45,370,664 | 2 | false | 0 | 0 | Any features that only have zeros (or any other constant value) in the training set, are not and cannot be useful for any ML model. You should discard them. The model cannot learn any information from them so it won't matter that the test data do have some non-zero values.
Generally, you should do normalization or standardization before feeding data to PCA/SVD; otherwise these methods will catch wrong patterns in the data (e.g. if features are on different scales).
Regarding the reason behind such a difference in the accuracy, I'm not sure. I guess it has to do with some peculiarities of the dataset. | 2 | 2 | 1 | I'm having data with around 60 features and most will be zeros most of the time in my training data only 2-3 cols may have values( to be precise its perf log data). however, my test data will have some values in some other columns.
I've done normalization/standardization(tried both separately) and feed it to PCA/SVD(tried both separately). I used these features in to fit my model but, it is giving very inaccurate results.
Whereas, if I skip normalization/standardization step and directly feed my data to PCA/SVD and then to the model, its giving accurate results(almost above 90% accuracy).
P.S.: I've to do anomaly detection so using Isolation Forest algo.
why these results are varying? | Is it good to normalization/standardization data having large number of features with zeros | 0.099668 | 0 | 0 | 2,142 |
45,370,442 | 2017-07-28T09:58:00.000 | 3 | 0 | 0 | 0 | python,machine-learning,pca,svd,normalize | 45,371,303 | 2 | true | 0 | 0 | Normalization and standarization (depending on the source they sometimes are used equivalently, so I'm not sure what you mean exactly by each one in this case, but it's not important) are a general recommendation that usually works well in problems where the data is more or less homogeneously distributed. Anomaly detection however is, by definition, not that kind of problem. If you have a data set where most of the examples belong to class A and only a few belong to class B, it is possible (if not necessary) that sparse features (features that are almost always zero) are actually very discriminative for your problem. Normalizing them will basically turn them to zero or almost zero, making it hard for a classifier (or PCA/SVD) to actually grasp their importance. So it is not unreasonable that you get better accuracy if you skip the normalization, and you shouldn't feel you are doing it "wrong" just because you are "supposed to do it"
I don't have experience with anomaly detection, but I have some with unbalanced data sets. You could consider some form of "weighted normalization", where the computation of the mean and variance of each feature is weighted with a value inversely proportional to the number of examples in the class (e.g. examples_A ^ alpha / (examples_A ^ alpha + examples_B ^ alpha), with alpha some small negative number). If your sparse features have very different scales (e.g. one is 0 in 90% of cases and 3 in 10% of cases and another is 0 in 90% of cases and 80 in 10% of cases), you could just scale them to a common range (e.g. [0, 1]).
In any case, as I said, do not apply techniques just because they are supposed to work. If something doesn't work for your problem or particular dataset, you are right not to use it (and trying to understand why it doesn't work may yield some useful insights). | 2 | 2 | 1 | I have data with around 60 features, and most will be zeros most of the time; in my training data only 2-3 columns may have values (to be precise, it's perf log data). However, my test data will have some values in some other columns.
I've done normalization/standardization (tried both separately) and fed it to PCA/SVD (tried both separately). I used these features to fit my model, but it is giving very inaccurate results.
Whereas if I skip the normalization/standardization step and directly feed my data to PCA/SVD and then to the model, it gives accurate results (almost above 90% accuracy).
P.S.: I have to do anomaly detection, so I'm using the Isolation Forest algorithm.
Why are these results varying? | Is it good to normalize/standardize data having a large number of features with zeros | 1.2 | 0 | 0 | 2,142
45,370,731 | 2017-07-28T10:11:00.000 | 4 | 0 | 0 | 0 | python-2.7,sockets,tcp,server,client | 45,371,518 | 1 | true | 0 | 0 | It defines the length of the backlog queue, which is the number of incoming connections that have been completed by the TCP/IP stack but not yet accepted by the application.
It has nothing whatsoever to do with the number of concurrent connections that the server can handle. | 1 | 1 | 0 | What does the parameter of 1 mean in the listen(1) method of socket? I am using the socket module in Python 2.7 and I have created a basic server that I want to connect to multiple clients (all on a local machine) and transmit data between them. I know there are simpler ways of doing this, but I want practice for when the clients would not all be on the same machine and may need to retrieve something from the server first, so they could not bypass it. I was wondering if the 1 in listen referred to the number of connections the server would make at a single time, and if not, what it did mean. I really want to understand in detail how parts of the process work, so any help would be appreciated. | What does the parameter of 1 mean in `listen(1)` method of socket module in python? | 1.2 | 0 | 1 | 6,274
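A minimal sketch showing where the backlog argument appears; per the answer, it bounds completed-but-not-yet-accepted connections, not total concurrent clients:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 5000))

# Up to 1 completed connection may wait in the queue before accept().
server.listen(1)

while True:
    conn, addr = server.accept()  # pops one connection off the backlog
    conn.sendall(b'hello')
    conn.close()
```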
45,371,167 | 2017-07-28T10:31:00.000 | 0 | 1 | 0 | 0 | php,python,symfony,frameworks | 45,375,722 | 1 | false | 0 | 0 | You can try the Django ORM or SQL Alchemy but the configuration of the models have to be done very carefully. Maybe you can write a parser from Doctrine2 config files to Django models. If you do, open source it please. | 1 | 0 | 0 | I have web application which made by symfony2(php framework)
So there is mysql database handled by doctrine2 php source code.
Now I want to control this DB from python script.
Of course I can access directly to DB from python.
However, it is complex and might break the doctrine2 rule.
Is there a good way to access database via php doctrine from python?? | How to access doctrine database made by php from python | 0 | 1 | 0 | 153 |
45,373,514 | 2017-07-28T12:26:00.000 | 1 | 0 | 0 | 0 | python,html,screen-scraping | 45,373,618 | 3 | false | 1 | 0 | Yes, I believe you should be able to.
Try looking up the requests and BeautifulSoup Python modules. | 1 | 0 | 0 | I am fairly new to Python, but I was wondering if I could utilize Python and its modules to retrieve an href from page 1, and then the first paragraph on page 2.
Q2: Also, how could I scrape the first 10 link hrefs with the same div class on page one, and then scrape the first 10 paragraphs, while looping? | Can Python get a Href link on page one, and then get a paragraph from page 2? | 0.066568 | 0 | 1 | 460
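A hedged sketch of the approach the answer names, using requests and BeautifulSoup; the URL and element lookups are illustrative assumptions about the target pages:

```python
import requests
from bs4 import BeautifulSoup

# Page 1: grab the first href (hypothetical starting URL).
page1 = BeautifulSoup(requests.get('http://example.com').text, 'html.parser')
link = page1.find('a')['href']

# Page 2: follow the link and grab the first paragraph.
page2 = BeautifulSoup(requests.get(link).text, 'html.parser')
print(page2.find('p').get_text())
```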
45,376,218 | 2017-07-28T14:37:00.000 | 0 | 0 | 0 | 1 | python,windows,powershell,logging,cmd | 45,376,375 | 1 | false | 0 | 0 | I'm not sure but i think you can use tail -f filename even with a log file inside a windows batch | 1 | 0 | 0 | I have a subprocess in Python that I kick off that produces a log file. I need another subprocess to tail this log file as it is being generated and obtain the results at the end of my first subprocess (the thing that generates the log file).
This needs to be achieved on Windows boxes so I cannot use tail. I have looked into Get-Contents but am not entirely sure whether I can make Get-Contents persist and return only when my first subprocess (the log generator) finishes execution.
How would I achieve this? | Tail a log file as subprocess in Python [Windows box] | 0 | 0 | 0 | 307 |
45,380,268 | 2017-07-28T18:36:00.000 | 2 | 1 | 1 | 0 | python | 45,380,349 | 2 | false | 0 | 0 | If you mean Standard Python (CPython) by Python, then no! The byte-code (.pyc or .pyo files) are just a binary version of your code line by line, and is interpreted at run-time. But if you use pypy, yes! It has a JIT Compiler and it runs your byte-codeas like Java dn .NET (CLR). | 1 | 6 | 0 | I'm a little confused as to how the PVM gets the cpu to carry out the bytecode instructions. I had read somewhere on StackOverflow that it doesn't convert byte-code to machine code, (alas, I can't find the thread now).
Does it already have tons of pre-compiled machine instructions hard-coded that it runs/chooses one of those depending on the byte code?
Thank you. | Does the Python Virtual Machine (CPython) convert bytecode into machine language? | 0.197375 | 0 | 0 | 2,285 |
45,385,751 | 2017-07-29T05:45:00.000 | 1 | 0 | 0 | 0 | python,django,database,django-models | 45,386,569 | 1 | false | 1 | 0 | The whole point of migrations is that you run them on both your local database and in production, to keep them in sync. | 1 | 1 | 0 | When we work with Django in local environment, we change the structure of the Data Base using Command Prompt through migration.
But for using Django in server, i don't now how can i apply such changes? How can i type commands to change Data Base structure? Is it a good way to upload site files every time that i do some change again. | Changing database structure of Django in server | 0.197375 | 0 | 0 | 196 |
45,386,603 | 2017-07-29T07:25:00.000 | 5 | 0 | 1 | 0 | python,image | 45,387,187 | 2 | true | 0 | 0 | I encountered a similar problem before. I used PIL.Image.tobytes() to convert the image to a byte object, then call hash() on the byte object and compared the hash values. | 1 | 0 | 0 | I need to copy images from 'Asset' folder in Windows 10 which has background images automatically downloaded. Some of these images will never be displayed and at some point deleted. To make sure I have seen all the new images before they are deleted I have created a Python script that copy these images into a different folder. To efficient I need a way to compare two images those that only the new ones are copied. All I need to do is to have a function that takes two images compare them with a simple approach to be sure that the two images are not visually identical. A simple test would be to take an image file copy it and compare the copy and the original, in which case the function should be able to tell that those are the same images.
How can I compare two images in python? I need simple and efficient way to do it. Several answers I have read are a bit complicated. | Simple Way to Compare Two Images in Python | 1.2 | 0 | 0 | 16,890 |
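A minimal sketch of the byte-based approach from this answer; the answer hashes the bytes, but comparing them directly is equivalent, and checking size and mode first guards against mismatched formats:

```python
from PIL import Image

def images_equal(path1, path2):
    im1, im2 = Image.open(path1), Image.open(path2)
    if im1.size != im2.size or im1.mode != im2.mode:
        return False
    # Identical pixel data => identical raw bytes.
    return im1.tobytes() == im2.tobytes()

print(images_equal('a.png', 'a_copy.png'))  # True for an exact copy
```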
45,390,326 | 2017-07-29T14:22:00.000 | 8 | 0 | 1 | 0 | python,jupyter-notebook | 50,105,503 | 7 | false | 0 | 0 | Without doing this %config IPCompleter.greedy=True after you import a package like numpy or pandas in this way;
import numpy as np
import pandas as pd.
Then you type in pd. then tap the tab button it brings out all the possible methods to use very easy and straight forward. | 1 | 152 | 0 | I would like to get an autocompletion feature in notebooks i.e. when I type something, a dropdown menu appears, with all the possible things I might type, without having to press the tab button. Is there such a thing?
I tried :
%config IPCompleter.greedy=True
but this requires the tab button to be pressed | How to get autocomplete in jupyter notebook without using tab? | 1 | 0 | 0 | 249,305 |
45,393,694 | 2017-07-29T20:21:00.000 | 1 | 0 | 1 | 0 | python | 45,393,759 | 4 | false | 0 | 0 | Use sys.getsizeof to get the size info | 1 | 10 | 0 | How can I manually calculate the size of a dictionary (the number of bytes it occupies in memory)? I read that initially it is 280 bytes, that it increases at the 6th key and then at the 86th, and so on. I want to calculate the size it will occupy when I have more than 10,000 keys. | Size of a dictionary in bytes | 0.049958 | 0 | 0 | 20,444
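A quick sketch of the answer's suggestion; note that sys.getsizeof reports only the dict structure itself (its hash table), not the key and value objects it references:

```python
import sys

d = {}
print(sys.getsizeof(d))  # small empty-dict baseline

# Watch the size jump as the hash table resizes while keys are added.
for i in range(10000):
    old = sys.getsizeof(d)
    d[i] = None
    if sys.getsizeof(d) != old:
        print(len(d), '->', sys.getsizeof(d), 'bytes')
```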
45,394,527 | 2017-07-29T22:08:00.000 | 23 | 0 | 0 | 0 | python,machine-learning,scikit-learn,grid-search | 45,394,598 | 1 | true | 0 | 0 | GridSearchCV will take the data you give it, split it into Train and CV set and train algorithm searching for the best hyperparameters using the CV set. You can specify different split strategies if you want (for example proportion of split).
But when you perform hyperparameter tuning information about dataset still 'leaks' into the algorithm.
Hence I would advise taking the following approach:
1) Take your original dataset and hold out some data as a test set (say, 10%)
2) Use grid search on remaining 90%. Split will be done for you by the algorithm here.
3) After you have the optimal hyperparameters, test them on the test set from #1 to get a final estimate of the performance you can expect on new data. | 1 | 12 | 1 | GridSearchCV uses StratifiedKFold or KFold. So my question is: should I split my data into train and test sets before using grid search, and then use the test data only for the final evaluation? I am not sure whether it is necessary, because the CV method already splits the data, but I have seen some examples which split the data beforehand.
Thank you. | Do I need to split data when using GridSearchCV? | 1.2 | 0 | 0 | 8,338 |
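A minimal scikit-learn sketch of the three steps the answer lays out; the estimator, parameter grid, and data variables X, y are illustrative assumptions:

```python
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# 1) Hold out a test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)

# 2) Grid search on the remaining 90%; CV splitting happens internally.
grid = GridSearchCV(SVC(), param_grid={'C': [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)

# 3) Final, untouched-data estimate of performance.
print(grid.best_params_, grid.score(X_test, y_test))
```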
45,396,835 | 2017-07-30T05:57:00.000 | 0 | 0 | 0 | 0 | python,django,amazon-web-services,amazon-ec2 | 45,396,854 | 2 | false | 1 | 0 | try to do telnet on that server on port 8000 and see if it works.
did u open port 8000 from security groups or not ! Most probably it is this issue. | 1 | 0 | 0 | I have my django application on ec2 instance. I am able run it on ec2 instance on localhost:8000. When I try to access that django application from outside of that ec2 instance, it doesn't show me "this site can't be reached". It is pinging | Amazon EC2 instance pinging but django server is not accessible from outside | 0 | 0 | 0 | 692 |
45,399,044 | 2017-07-30T10:53:00.000 | 0 | 0 | 0 | 0 | jquery,python,html,ajax,django | 45,399,435 | 1 | false | 1 | 0 | Using JQuery:
Pass whole options as key value pair, ex: {key : {key:val, ...}, ...}
Use JQuery to show and hide equivalent sub options when an
option is clicked.
Using AJAX:
When an option clicked, send an AJAX request to the server.
Fetch its sub options and show in the browser using JQuery. | 1 | 0 | 0 | I have a template where I have a dropdown. I would like to display some data based on the dropdown value selected. The data is fetched from the DB.
As I have no idea about AJAX or jQuery, I need help to dynamically display the content based on the dropdown selection.
Example: say my dropdown is [1,2,3,4]. When the user selects 1, it should display ONE in my HTML; if the user selects 2, then it should display TWO, and so on.
Can someone please share the HTML code for the above example (with AJAX or jQuery)? | Django - Dynamically get the appropriate data based on the dropdown selection | 0 | 0 | 0 | 408
45,399,347 | 2017-07-30T11:32:00.000 | -1 | 0 | 1 | 0 | python,python-3.x,postgresql,psycopg2 | 54,212,351 | 2 | false | 0 | 0 | class psycopg2.extras.RealDictCursor(*args, **kwargs)
A cursor that uses a real dict as the base type for rows. Note that this cursor is extremely specialized and does not allow the normal access (using integer indices) to fetched data. If you need to access database rows both as a dictionary and a list, then use the generic DictCursor instead of RealDictCursor.
class psycopg2.extras.RealDictConnection
A connection that uses RealDictCursor automatically.
Note
Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=RealDictCursor) instead of RealDictConnection.
class psycopg2.extras.RealDictRow(cursor)
A dict subclass representing a data record.
I was wondering why RealDictCursor has been implemented if DictCursor offers more flexibility? Is it performance-wise (or memory-wise) so different (in favor of RealDictCursor I imagine...)?
In other words, what are RealDictCursor use cases vs DictCursor? | psycopg2: DictCursor vs RealDictCursor | -0.099668 | 1 | 0 | 14,075 |
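A small sketch of both cursor factories side by side (the DSN is a placeholder); it mirrors the access difference the question describes:

```python
import psycopg2
from psycopg2.extras import DictCursor, RealDictCursor

conn = psycopg2.connect("dbname=test")  # placeholder DSN

with conn.cursor(cursor_factory=DictCursor) as cur:
    cur.execute("SELECT 1 AS x")
    row = cur.fetchone()
    print(row['x'], row[0])  # both key and index access work

with conn.cursor(cursor_factory=RealDictCursor) as cur:
    cur.execute("SELECT 1 AS x")
    row = cur.fetchone()
    print(row['x'])  # key access only; row[0] raises KeyError
```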
45,402,583 | 2017-07-30T17:18:00.000 | 0 | 0 | 1 | 0 | python,list,list-manipulation | 45,403,783 | 2 | false | 0 | 0 | you could use python set to do this , By definition a set is a well-defined collection of distinct objects,
if len(set(input_list))! =1:
print "not all items in the set are the same" | 1 | 0 | 0 | How do I check if every value in a list is equal to another value, x? For example, if I had a list that was completely full of the number 100, how would I return false based on that condition. Or if a list was full of the number 100 except for one single element which was 88, then I'd want to return true and for the if statement to execute.
Thank you. | Check If All Elements in a List Are Equal to Another Value | 0 | 0 | 0 | 8,103 |
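A sketch matching the question more directly, checking every element against a target value x:

```python
def any_differs(lst, x):
    """True if at least one element is not equal to x."""
    return not all(v == x for v in lst)

values = [100] * 10 + [88]
if any_differs(values, 100):
    print("found an element different from 100")
```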
45,404,241 | 2017-07-30T20:12:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,postgresql,orm | 45,404,605 | 3 | true | 1 | 0 | For sure there are other ways, if that's what you're asking. But Django ORM is quite flexible overall, and if you write your queries carefully there will be no significant overhead. 50000 rows in 15 minutes is not really big enough. I am using Django ORM with PostgreSQL to process millions of records a day. | 1 | 0 | 0 | I have a project that :
fetches data from active directory
fetches data from different services based on active directory data
aggregates data
about 50,000 rows have to be added to the database every 15 min
I'm using PostgreSQL as the database and Django as the ORM tool. But I'm not sure that Django is the right tool for such a project. I have to drop and add 50,000 rows of data and I'm worried about performance.
Is there another way to do such a process? | Collecting Relational Data and Adding to a Database Periodically with Python | 1.2 | 1 | 0 | 81
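One common way to keep the ORM overhead low for batches like this, in the spirit of the answer's "write your queries carefully", is bulk_create; the model name and source data here are made-up examples:

```python
from myapp.models import Measurement  # hypothetical model

rows = [Measurement(source=s, value=v) for s, v in fetched_data]

# One INSERT per batch instead of 50,000 individual queries.
Measurement.objects.bulk_create(rows, batch_size=1000)
```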
45,405,321 | 2017-07-30T22:26:00.000 | 0 | 0 | 1 | 0 | java,python,multithreading,concurrency,web-crawler | 45,405,432 | 4 | false | 0 | 0 | for crawler don't use ConcurrentHashMap, rather use Databse
The number of visisted URL's will grow very fast, so it is not a good thing to store them in memory, better use a databese, store the URL and the date it was last crawled, then just check the URL if it already exists in DB or is eligible for refreshing. I use for example a Derby DB in embedded mode, and it works perfectly for my web crawler. I don't advise to use in memory DB like H2, because with the number of crawled pages you eventually will get OutOfMemoryException.
You will rather rarely have the case of crawling the same page more than once in the same time, so checking in DB if it was already crawled recently is enough to not waste significant resources on "re-crawling the same pages over and over". I belive this is "a good solution that's not too dense and not too simplistic"
Also, using Databse with the "last visit date" for url, you can stop and continue the work when you want, with ConcurrentHashMap you will loose all the results when app exit. You can use "last visit date" for url to determine if it needs recrawling or not. | 2 | 1 | 0 | I'm playing around writing a simple multi-threaded web crawler. I see a lot of sources talk about web crawlers as obviously parallel because you can start crawling from different URLs, but I never see them discuss how web crawlers handle URLs that they've already seen before. It seems that some sort of global map would be essential to avoid re-crawling the same pages over and over, but how would the critical section be structured? How fine grained can the locks be to maximize performance? I just want to see a good example that's not too dense and not too simplistic. | Do concurrent web crawlers typically store visited URLs in a concurrent map, or use synchronization to avoid crawling the same pages twice? | 0 | 0 | 1 | 875 |
45,405,321 | 2017-07-30T22:26:00.000 | 2 | 0 | 1 | 0 | java,python,multithreading,concurrency,web-crawler | 45,409,820 | 4 | false | 0 | 0 | Specific domain use case : Use in memory
If it is specific domain say abc.com then it is better to have vistedURL set or Concurrent hash map in memory, in memory will be faster to check visited status, memory consumption will be comparatively less. DB will have IO overhead and it is costly and visited status check will be very frequent. It will hit your performance drastically. As per your use case, you can use in memory or DB. My use case was specific to domain where visited URL will not be again visited so I used Concurrent hash map. | 2 | 1 | 0 | I'm playing around writing a simple multi-threaded web crawler. I see a lot of sources talk about web crawlers as obviously parallel because you can start crawling from different URLs, but I never see them discuss how web crawlers handle URLs that they've already seen before. It seems that some sort of global map would be essential to avoid re-crawling the same pages over and over, but how would the critical section be structured? How fine grained can the locks be to maximize performance? I just want to see a good example that's not too dense and not too simplistic. | Do concurrent web crawlers typically store visited URLs in a concurrent map, or use synchronization to avoid crawling the same pages twice? | 0.099668 | 0 | 1 | 875 |
45,405,351 | 2017-07-30T22:31:00.000 | 1 | 0 | 1 | 0 | windows,python-3.x | 45,405,403 | 2 | true | 0 | 0 | Off the top of my head, there are at least two ways to do this:
You could make the script create an empty file in a specific location, and the other script could check for that. Note that you might have to manually remove the file if the script exits uncleanly.
You could list all running processes and check if the first one is among those processes. This is somewhat more brittle and platform-dependent.
An alternative hybrid strategy would be for the script to create the specific file and write its PID (process id) to it. The runner script could read that file, and if the specified PID either wasn't running or was not the script, it could delete the file. This is also somewhat platform-dependent. | 1 | 1 | 0 | I have a Python script called speech.pyw. I don't want it showing up on the screen when run, so I used that extension.
How can I check from another Python script whether or not this script is running? If it isn't running, that script should launch it. | Checking for a running python process using python | 1.2 | 0 | 0 | 417
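A sketch of the PID-file strategy from the answer; it leans on the third-party psutil package (an assumption, not part of the answer) because os.kill is unsafe for liveness checks on Windows:

```python
import subprocess

import psutil  # pip install psutil (assumed third-party dependency)

PID_FILE = 'speech.pid'  # hypothetical location

def is_running():
    try:
        with open(PID_FILE) as f:
            return psutil.pid_exists(int(f.read()))
    except (IOError, ValueError):
        return False  # no PID file or garbage content

if not is_running():
    # pythonw runs the .pyw without a console window.
    proc = subprocess.Popen(['pythonw', 'speech.pyw'])
    with open(PID_FILE, 'w') as f:
        f.write(str(proc.pid))
```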
45,405,916 | 2017-07-31T00:07:00.000 | 1 | 0 | 1 | 0 | python,mesh,fipy | 45,451,493 | 1 | true | 0 | 0 | I see. I made a desperate try - copied the gmsh.exe file into ..Anaconda2/Scripts/ and it did the job!
I think the fipy documentation should mention this. The chapter on mesh generation only says that you need gmsh, but does not specify that the application (.exe) has to be in the directory with python modules. But this is not quite intuitive (it is not a python file, not installed by pip, just a downloaded application from the web) and yet it is essential for running it. | 1 | 2 | 0 | I am new to fipy, so excuse my ignorance if I ask something that should be obvious. But I fail to run an already existing (and working - on other machines) script, getting the EnvironmentError: Gmsh version must be >= 2.0. I may have missed something during the installation? Fipy manual is a bit vague about implementation of gmsh. It only provides the link to download file and that meshes can be generated by it, but it doesn't say WHERE gmsh should be installed (so I have my gmsh-3.0.3-Windows in the default, i.e. in Program Files (x86). Should it be installed in some particular directory, perhaps the same as fipy?
(I really apologise for a stupid question, but this is the case when if you know what question to ask, you already know the answer and don't need to ask at all.)
For completeness, I am running it on Windows 7, Python 2.7 from Anaconda distribution, fipy 3.1.3. | EnvironmentError: Gmsh version must be >= 2.0 | 1.2 | 0 | 0 | 812 |
45,406,471 | 2017-07-31T01:51:00.000 | 0 | 0 | 0 | 0 | python,authentication,login,web-crawler,lxml | 45,406,522 | 1 | true | 1 | 0 | It very much depends on the method of authentication used. If it's HTTP Basic Auth, then you should be able to pass those headers along with the request. If it's using a web page-based login, you'll need to automate that request and pass back the cookies or whatever session token is used with the next request. | 1 | 0 | 0 | I can get html of a web site using lxml module if authentication is not required. However, when it required, how do I input 'User Name' and 'Password' using python? | How to get html using python when the site requires authenticasion? | 1.2 | 0 | 1 | 29 |
45,408,669 | 2017-07-31T06:17:00.000 | 1 | 0 | 0 | 0 | python,sockets,ssl | 45,409,087 | 1 | false | 0 | 0 | SSLv3 is considered insecure and should no longer be used. Because of this many current installation of OpenSSL come without support for SSLv3, i.e. it is not compiled into the library. In this case you get the error about unsupported method if you try to explicitly use it and you get a similar error if the SSL handshake fails because the peer tries to use this locally unsupported TLS version.
Is there any way to fix that?
Don't try to enforce the use of SSLv3. Instead, use the sane and secure default protocol setting. | 1 | 1 | 0 | I'm trying to make a game server emulator (which probably uses SSLv3 for communication),
and I'm trying to make an SSL socket with SSLv3 support.
Here is the line that causes the problem: context = SSL.Context(SSL.SSLv3_METHOD)
It results in this: ValueError: No such protocol
Additionally, I tried to use SSL.SSLv23_METHOD - it works, but while the client is trying to connect I'm getting this error:
OpenSSL.SSL.Error: [('SSL routines', 'tls_process_client_hello', 'version too low')]
As you can see, I'm getting the 'version too low' error; that's why I'm trying to make the SSLv3 server.
Is there any way to fix that? | Python: SSL.Context(SSL.SSLv3_METHOD) = No such protocol | 0.197375 | 0 | 1 | 1,451 |
45,410,078 | 2017-07-31T07:44:00.000 | 0 | 0 | 0 | 0 | python,redis,scrapy | 45,420,788 | 1 | false | 1 | 0 | The pipeline is a different script, yes. In the settings file you can enable the pipeline. A pipeline can be used to store the crawled results in any database you want. | 1 | 0 | 0 | I am using scrapy-redis now, and I am ok with it, and I am success to crawl in different computer by using the same redis server.
But I don't understand how to use the scrapy-redis pipeline properly.
In my understanding, I think I need another script than the spiders to deal with the item in the redis pipeline list, then I can do stuffs like store them into the database.
Do I understand right, do I have to write another script, which is somehow dependent from the spider? | how to use scrapy-redis pipeline? | 0 | 0 | 0 | 489 |
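A sketch of the two pieces the answer mentions: enabling the scrapy-redis pipeline in settings.py, and a separate consumer script; the item-popping logic and the Redis key name are assumptions about a typical scrapy-redis setup:

```python
# settings.py: push every scraped item into a Redis list.
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
REDIS_URL = 'redis://localhost:6379'
```

```python
# consumer.py: a separate process that drains items and stores them.
import json
import redis

r = redis.StrictRedis('localhost', 6379)
while True:
    _, data = r.blpop('myspider:items')  # key name depends on your spider
    item = json.loads(data)
    # ... insert the item into your database here ...
```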
45,410,644 | 2017-07-31T08:18:00.000 | 1 | 0 | 0 | 0 | python,numpy,tensorflow | 45,410,830 | 2 | false | 0 | 0 | Of course there is a real difference. Numpy works on arrays which can use highly optimized vectorized computations and it's doing pretty well on CPU whereas tensorflow's math functions are optimized for GPU where many matrix multiplications are much more important. So the question is where you want to use what. For CPU, I would just go with numpy whereas for GPU, it makes sense to use TF operations. | 1 | 6 | 1 | Is there any real difference between the math functions performed by numpy and tensorflow. For example, exponential function, or the max function?
The only difference I noticed is that tensorflow takes tensors as input, not numpy arrays.
Is this the only difference, and is there no difference in the results of the functions, by value? | Tensorflow vs Numpy math functions | 0.099668 | 0 | 0 | 2,450
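A quick TF 1.x-era sketch comparing the two element-wise exponentials; numerically they should agree up to floating-point tolerance:

```python
import numpy as np
import tensorflow as tf

x = np.array([0.0, 1.0, 2.0], dtype=np.float32)

np_result = np.exp(x)  # eager NumPy computation on the CPU
with tf.Session() as sess:
    tf_result = sess.run(tf.exp(tf.constant(x)))  # graph-based TF op

print(np.allclose(np_result, tf_result))  # True (up to float tolerance)
```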
45,411,084 | 2017-07-31T08:41:00.000 | 0 | 0 | 0 | 0 | python,django,amazon-web-services,amazon-elastic-beanstalk | 45,495,283 | 1 | true | 1 | 0 | The first time I deployed a Django app to aws eb it took me about ten to twelve hours of work.
AWS eb is not the easiest to work with because of the lack of good documentation.
Still, if you understand what's going on, it's not that bad. | 1 | 1 | 0 | How long does it normally take to deploy a Python Django application to AWS Beanstalk (for a simple social platform)? Developers are giving me estimations of 90 to 100 hours of work. It feels like a bit of a rip-off.
Are there responsible, experienced professionals here who have set up such a deployment? | deployment Python Django applications to AWS Beanstalk | 1.2 | 0 | 0 | 46
45,412,902 | 2017-07-31T10:05:00.000 | 0 | 1 | 0 | 0 | django,python-social-auth | 45,424,674 | 1 | true | 1 | 0 | If there's a user already logged in in your app, then python-social-auth will associate the social account with that user, if it should be another user, then the first must be logged out. Basically python-social-auth will create new users if there's none logged in at the moment, otherwise it will associate the the social account. | 1 | 0 | 0 | With python social auth, when a user is logged in when he clicks 'Login with Facebook' or similar.
The request.user is not the newly logged in facebook user but the old logged in user.
log in with email [email protected]
log in with facebook email [email protected]
Logged in user (request.user) is still [email protected]
Is this intended behavior?
Is there a way to fix this or should I not present log-in unless he's not logged out? | python social auth, log in without logging out | 1.2 | 0 | 1 | 49 |
45,413,651 | 2017-07-31T10:40:00.000 | 4 | 0 | 0 | 0 | python-3.x,deep-learning,keras | 45,414,023 | 1 | true | 0 | 0 | One way would be to use a Jupyter notebook: load your model in one cell and do continuous predictions in subsequent cells.
Another way is to setup a server with Flask and run predictions against a simple API. | 1 | 3 | 1 | I built and trained my nn and now it is time to make predictions for the given input data. But I don't know the proper way to make fast predictions with the trained nn. What I am currently doing is loading model every time and making predictions on it. I wonder if there is a way to load the model on memory permanently (for a session) and then make predictions. | Keras: How to make fast predictions with trained network? | 1.2 | 0 | 0 | 998 |
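A minimal sketch of the Flask approach, assuming a Keras model saved as "model.h5" (the file name and the JSON input layout are placeholders):

import numpy as np
from flask import Flask, request, jsonify
from keras.models import load_model

app = Flask(__name__)
model = load_model('model.h5')  # loaded once at startup, kept in memory

@app.route('/predict', methods=['POST'])
def predict():
    data = np.array(request.get_json()['inputs'])
    return jsonify(predictions=model.predict(data).tolist())

if __name__ == '__main__':
    app.run()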
45,414,050 | 2017-07-31T10:59:00.000 | 1 | 0 | 1 | 0 | python-3.x,python-multithreading,python-asyncio | 48,280,602 | 2 | true | 0 | 0 | Sure it may make sense.
Asynchronous code in principle runs a bunch of routines in the same thread.
This means that the moment one routine has to wait for input or output (I/O) it will halt that routine temporarily and simply starts processing another routine until it encounters a wait there, etc.
Multi-threaded (or "parallelized" code) runs in principle at the same time on different cores of your machine. (Note that in Python parallel processing is achieved by using multiple processes as pointed out by @Yassine Faris below).
It may make perfect sense to use both in the same program. Use asyncio in order to keep processing while waiting for I/O. Use multi-threading (multi processing in Python) to do, for example, heavy calculations in parallel in another part of your program. | 1 | 2 | 0 | Would it make sense to use both asyncio and threading in the same python project, so that code runs in different threads and in some of them asyncio is used to get sequential-looking code for asynchronous activities?
or would trying to do this mean that I am missing some basic concept on the usage of either threading or asyncio? | using asyncio and threads | 1.2 | 0 | 0 | 5,183 |
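A minimal sketch of one common way to combine the two, using run_in_executor to push blocking work onto a thread pool while the event loop keeps serving I/O (the heavy_calc function is a placeholder):

import asyncio
from concurrent.futures import ThreadPoolExecutor

def heavy_calc(n):          # placeholder CPU/blocking work
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_event_loop()
    with ThreadPoolExecutor() as pool:
        # runs in a worker thread; the event loop stays free for I/O
        result = await loop.run_in_executor(pool, heavy_calc, 10**6)
    print(result)

asyncio.get_event_loop().run_until_complete(main())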
45,418,207 | 2017-07-31T14:09:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 45,418,863 | 3 | false | 0 | 0 | No, it does not have PEP 257 checks yet.
I do have a free recommendation for you though: use Sublime Text 3 and its plugins for Python development. It is free, fast and awesome. | 1 | 9 | 0 | I am new to PyCharm and I was not able to find anything that refers to PEP 257 checks in code. I have been using Atom with its specific packages to work with Python and it has managed PEP 257 checks very well. Because of that, I would be surprised if a non-cheap IDE did not have this feature.
Thanks! | Does Pycharm have Docstring Conventions checks (PEP 257)? | 0.066568 | 0 | 0 | 2,352 |
45,421,302 | 2017-07-31T16:46:00.000 | 0 | 0 | 0 | 1 | python,windows,cmd,icacls | 45,433,548 | 3 | false | 0 | 0 | @Jean-François Fabre gave me the clue:
Quoting my target argument made sense, since it has blanks, so quoting is required when calling from cmd. However, it seems Python will over-quote.
Thank you all guys for your help!!! | 1 | 1 | 0 | I've used successfully subprocess.check_output to call plenty of windows programs.
Yet, I'm troubled at calling icacls.
By cmd, this works:
cmd>icacls "C:\my folder" /GRANT *S-1-1-0:F
I've tried:
subprocess.check_output(['C:\\Windows\\System32\\icacls.exe','"C:\\my folder"','/GRANT *S-1-1-0:F'],shell=True,stderr=subprocess.STDOUT)
but the return code is 123 (according to Microsoft, invalid file name).
I've also tried (which also works from cmd)
subprocess.check_output(['C:\\Windows\\System32\\icacls.exe','"C:/my folder"','/GRANT *S-1-1-0:F'],shell=True,stderr=subprocess.STDOUT)
but return code is also 123.
Any idea? | calling windows' icacls from python | 0 | 0 | 0 | 1,557 |
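For reference, a sketch of a call that avoids the over-quoting: when subprocess gets an argument list, it does the quoting itself, so the embedded quotes should be dropped and /GRANT separated from its value (mirroring the cmd line above; an untested sketch, not a verified fix):

import subprocess
out = subprocess.check_output(
    ['C:\\Windows\\System32\\icacls.exe', 'C:\\my folder', '/GRANT', '*S-1-1-0:F'],
    stderr=subprocess.STDOUT)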
45,422,064 | 2017-07-31T17:35:00.000 | 0 | 1 | 1 | 0 | python,py2exe | 45,422,227 | 1 | true | 0 | 0 | The generated executable is essentially a packaged combination of the python interpreter and the compiled (.pyc) files, including any imported packages that might be necessary. | 1 | 0 | 0 | When you convert Python to an executable, if the Python script imports some modules, how will any other device run the executable file if it has neither Python nor the modules installed? | Something that I don't understand about python to executable file | 1.2 | 0 | 0 | 50
45,426,565 | 2017-07-31T23:04:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,logging | 45,432,356 | 1 | true | 0 | 0 | I have found a way to avoid the memory leak in the first approach. The key to it is to create the loggers manually by instantiating the Logger class instead of calling logging.getLogger. This way the temporary loggers are not registered in the centralized registry and thus can be garbage collected. | 1 | 2 | 0 | I have some Python code which writes some logs (using the standard logging module). In most cases, I want the logs to simply propagate to upper-level loggers, but sometimes I need to also write the logs to an in-memory stream (e.g. a StringIO) for later retrieval.
I have thought of two approaches to this:
a) Create a new temporary logger instance with a unique name for each run of the code. If needed, attach an additional handler to that logger for my in-memory logging.
The problem here is that, since there is no way of removing loggers in Python, I get an inevitable memory leak.
b) Make a wrapper function for logging that calls logger.log and, if needed, also performs the in-memory logging.
The problem here is that I lose the information about filename and line number at which the logging was performed, as the logging module thinks that it was called from the wrapper.
What would be the best workaround for this problem?
If that makes any difference in this case, I am using Python 2.7.
Thanks in advance! | Python - temporary loggers for selective in-memory logging | 1.2 | 0 | 0 | 561 |
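A minimal sketch of the fix described in the answer above: instantiating Logger directly so the temporary logger never enters the global registry (shown for Python 3; on 2.7 swap io.StringIO for StringIO.StringIO):

import io
import logging

def make_temp_logger():
    buf = io.StringIO()
    logger = logging.Logger('temp')  # NOT logging.getLogger: stays unregistered
    logger.addHandler(logging.StreamHandler(buf))
    return logger, buf

logger, buf = make_temp_logger()
logger.warning('captured in memory')
print(buf.getvalue())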
45,428,631 | 2017-08-01T03:47:00.000 | 0 | 0 | 0 | 0 | python,pandas,pandas-groupby | 45,428,708 | 2 | false | 0 | 0 | Yes, you can reindex the new dataframe using the reset_index() method. | 1 | 4 | 1 | I have a pandas GroupBy object. I am using head(k) to extract the first k elements of each group into a dataframe and I want to also extract the complement. Each group has a nonconstant size.
Is there any straightforward way of doing this? | Getting all but the first k rows from a group in a GroupBy object | 0 | 0 | 0 | 433
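One straightforward way, sketched with hypothetical column names, is to keep rows whose within-group position is at least k via cumcount:

import pandas as pd

df = pd.DataFrame({'key': ['a', 'a', 'a', 'b', 'b'], 'val': range(5)})
k = 1
head = df.groupby('key').head(k)
complement = df[df.groupby('key').cumcount() >= k]  # everything head(k) dropped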
45,431,931 | 2017-08-01T07:49:00.000 | 1 | 0 | 1 | 0 | linux,matlab,python-3.x,anaconda,ubuntu-16.04 | 45,432,298 | 1 | true | 0 | 0 | Did it myself. Just copied the matlab folder which was formed in matlab directory for py2.7 to my anaconda's virtual-env's site-packages.
According to the paths mentioned above in the question, you need to run this in a Linux terminal:
cp /usr/local/MATLAB/R2016a/extern/engines/python/build/lib.linux-x86_64-2.7/matlab /home/fire-trail/anaconda3/envs/py34/lib/python3.4
and it will work with py34 in anaconda.
Remember that the minimum requirement for the MATLAB engine on Linux is MATLAB 2014b and Python 2.7.
Hope this helps others. | 1 | 2 | 1 | I'm trying to get Matlab's python engine to work with my Anaconda installation on Linux. But I'm not quite getting it right.
Anaconda's Python version: 3.6 (created a virtual-env for python 3.4)
Matlab Version: 2016b
Path to matlab root: /usr/local/MATLAB
Path to Anaconda: /home/fire-trail/anaconda3
Virtual env: py34
I installed matlab engine via official documentation from mathworks but it installs it in the default Linux Python installation and that too in Python 2.7
I want Anaconda 3.4 virtual env (py34) to find matlab engine. | Linux[Ubuntu 16.04]-Installing MATLAB engine for Anaconda Python3 | 1.2 | 0 | 0 | 594 |
45,437,357 | 2017-08-01T12:05:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,reinforcement-learning,openai-gym | 45,468,144 | 2 | false | 0 | 0 | No, OpenAI Gym environments will not provide you with the information in that form. In order to collect that information you will need to explore the environment via sampling: i.e. selecting actions and receiving observations and rewards. With these samples you can estimate them.
One basic way to approximate these values is to use LSPI (least-squares policy iteration); as far as I remember, you will find more about this in Sutton too. | 1 | 3 | 1 | I am currently reading "Reinforcement Learning" from Sutton & Barto and I am attempting to write some of the methods myself.
Policy iteration is the one I am currently working on. I am trying to use OpenAI Gym for a simple problem, such as CartPole or continuous mountain car.
However, for policy iteration, I need both the transition matrix between states and the Reward matrix.
Are these available from the 'environment' that you build in OpenAI Gym?
I am using python.
If not, how do I calculate these values, and use the environment? | Implementing Policy iteration methods in Open AI Gym | 0.197375 | 0 | 0 | 1,540 |
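A rough sketch of estimating transition and reward matrices by sampling, assuming a discrete-state, discrete-action environment such as FrozenLake (the environment name and episode count are placeholders; the 2017-era step() API returning obs, reward, done, info is assumed):

import numpy as np
import gym

env = gym.make('FrozenLake-v0')
nS, nA = env.observation_space.n, env.action_space.n
counts = np.zeros((nS, nA, nS))
rewards = np.zeros((nS, nA, nS))

for _ in range(5000):
    s = env.reset()
    done = False
    while not done:
        a = env.action_space.sample()
        s2, r, done, _ = env.step(a)
        counts[s, a, s2] += 1
        rewards[s, a, s2] += r
        s = s2

P = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)  # estimated T(s,a,s')
R = rewards / np.maximum(counts, 1)                            # estimated R(s,a,s')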
45,437,617 | 2017-08-01T12:19:00.000 | 0 | 0 | 1 | 1 | python,linux,ubuntu,gurobi | 45,439,246 | 1 | false | 0 | 0 | You can test gurobipy via the command from gurobipy import *. If that gives no error, then the installation worked, and you can ignore the messages from running setup.py. | 1 | 0 | 0 | I can not install the Python package Gurobipy. I get the following output when I try it:
running install
running build
running build_py
running install_lib
running install_egg_info
Removing /usr/local/lib/python2.7/dist-packages/gurobipy-7.5.1.egg-info
Writing /usr/local/lib/python2.7/dist-packages/gurobipy-7.5.1.egg-info
removing 'build/lib.linux-x86_64-2.7' (and everything under it)
'build/bdist.linux-x86_64' does not exist -- can't clean it
'build/scripts-2.7' does not exist -- can't clean it
removing 'build'
I run Ubuntu 16.04, Python 2.7, and Gurobi 7.5.1. gurobi.sh is working fine... | Install gurobipy package - error | 0 | 0 | 0 | 1,240
45,440,820 | 2017-08-01T14:37:00.000 | 0 | 0 | 1 | 0 | python,django,naming-conventions | 45,441,741 | 1 | false | 1 | 0 | That's standard behaviour and NOT a bug. Underscores in module or package names are legal, so you may put underscores in these names.
On the other hand, having a Clockwise_Counter_Config class is somewhat ugly (but it works).
Ah, and the INSTALLED_APPS setting is a list (or sequence) of app packages' dotted names, not of class names. | 1 | 1 | 0 | PEP 8 says "Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged."
I am working on an educational program which could, ultimately, have many lessons. Sometimes one app could deliver thousands of lessons (i.e. spelling), but still, there could be many apps. Underscores in lower case names for readability seems essential.
When I use an underscore in an app name, such as clockwise_counter, "python manage.py startapp clockwise_counter" removes the underscore in constructing the class name in apps.py. The class name becomes
"class ClockwiseCounterConfig(AppConfig):
name = 'clockwise_counter'"
This caused me a lot of confusion until I learned to copy the apps.py class name into the INSTALLED_APPS section of settings.py by removing the underscore.
My Questions are:
Why are the underscores discouraged in PEP 8? Is there really a good reason, or was it just personal preference sometime in the past?
Am I likely to have problems using underscores in app names for readability now or in the future? | Django app.py removes underscore in Config | 0 | 0 | 0 | 399 |
45,443,395 | 2017-08-01T16:43:00.000 | 2 | 0 | 0 | 0 | python,excel,number-formatting,win32com | 45,443,851 | 1 | true | 0 | 0 | Pass a single leading quote to Excel ahead of the number, for example "'5307245040001" instead of "5307245040001" | 1 | 0 | 0 | I am trying to generate a report in excel using win32com. I can get the information into the correct cells. However, one of my columns contains an ID number, and excel is formatting it as a number (displaying it in scientific notation). I have tried formatting the cell as text using sheet.Range(cell).NumberFormat = '@', which works, but will only update after the cell has been selected in the actual excel file. The same thing happens whether I format the cell before or after entering the data. Is there a way to refresh the cell formatting using win32com? I want the ID numbers to display correctly as soon as the com instance is made visible. | Formatting does not automatically update when using excel with win32com | 1.2 | 1 | 0 | 328 |
45,443,475 | 2017-08-01T16:47:00.000 | 0 | 1 | 1 | 0 | python,function,methods,getattr | 45,451,015 | 1 | false | 0 | 0 | Answered my own question. I used if statements to handle groups of attributes, and then as a last resort, used return getattr(my_cls_name, attr) to allow any other builtin function to run normally. So now when it realizes there is no __iadd__ handler, it uses my __add__ instead. same with the other operators. | 1 | 1 | 0 | I have successfully overridden the __getattr__ method in order to allow for complex behavior. However I still would like to allow some of the __builtin__ functions to be default. For example, I have used __getattr__ to handle __add__, __sub__, __mul__, etc. However, the __iadd__, __isub__, __imul__, etc. are trying to use the __getattr__ method and are throwing an error. I could just define their behaviors as well, but I think that allowing these methods to run as default would be better.
Bottom line: I would like to allow __getattr__ to filter which attributes it handles, and which it allows to run as default. | python: selectively override __getattr__ | 0 | 0 | 0 | 136 |
45,444,964 | 2017-08-01T18:12:00.000 | 21 | 0 | 0 | 0 | python,gensim,word2vec | 45,453,040 | 2 | true | 0 | 0 | size is, as you note, the dimensionality of the vector.
Word2Vec needs large, varied text examples to create its 'dense' embedding vectors per word. (It's the competition between many contrasting examples during training which allows the word-vectors to move to positions that have interesting distances and spatial-relationships with each other.)
If you only have a vocabulary of 30 words, word2vec is unlikely an appropriate technology. And if trying to apply it, you'd want to use a vector size much lower than your vocabulary size – ideally much lower. For example, texts containing many examples of each of tens-of-thousands of words might justify 100-dimensional word-vectors.
Using a higher dimensionality than vocabulary size would more-or-less guarantee 'overfitting'. The training could tend toward an idiosyncratic vector for each word – essentially like a 'one-hot' encoding – that would perform better than any other encoding, because there's no cross-word interference forced by representing a larger number of words in a smaller number of dimensions.
That'd mean a model that does about as well as possible on the Word2Vec internal nearby-word prediction task – but then awful on other downstream tasks, because there's been no generalizable relative-relations knowledge captured. (The cross-word interference is what the algorithm needs, over many training cycles, to incrementally settle into an arrangement where similar words must be similar in learned weights, and contrasting words different.) | 2 | 9 | 1 | I have been struggling to understand the use of size parameter in the gensim.models.Word2Vec
From the Gensim documentation, size is the dimensionality of the vector. Now, as far as my knowledge goes, word2vec creates a vector of the probability of closeness with the other words in the sentence for each word. So, suppose if my vocab size is 30 then how does it create a vector with the dimension greater than 30? Can anyone please brief me on the optimal value of Word2Vec size?
Thank you. | Python: What is the "size" parameter in Gensim Word2vec model class | 1.2 | 0 | 0 | 14,983 |
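A small illustration with the 2017-era gensim API (the toy corpus is a placeholder; note that in later gensim releases the parameter was renamed to vector_size):

from gensim.models import Word2Vec

sentences = [['the', 'cat', 'sat'], ['the', 'dog', 'sat']]  # toy corpus
model = Word2Vec(sentences, size=100, window=5, min_count=1)
print(model.wv['cat'].shape)  # (100,) -- one dense vector per word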
45,444,964 | 2017-08-01T18:12:00.000 | 0 | 0 | 0 | 0 | python,gensim,word2vec | 65,432,085 | 2 | false | 0 | 0 | It's equal to vector_size.
To make it easy: it's the uniform dimensionality of the output vector for each word that you trained with word2vec. | 0 | 9 | 1 | I have been struggling to understand the use of size parameter in the gensim.models.Word2Vec
From the Gensim documentation, size is the dimensionality of the vector. Now, as far as my knowledge goes, word2vec creates a vector of the probability of closeness with the other words in the sentence for each word. So, suppose if my vocab size is 30 then how does it create a vector with the dimension greater than 30? Can anyone please brief me on the optimal value of Word2Vec size?
Thank you. | Python: What is the "size" parameter in Gensim Word2vec model class | 0 | 0 | 0 | 14,983 |
45,446,829 | 2017-08-01T20:08:00.000 | 0 | 0 | 0 | 0 | python,caffe | 45,447,380 | 1 | true | 0 | 0 | You are confusing test and validation sets. A validation set is a set where you know the labels (like in training) but you do not train on it. The validation set is used to make sure you are not overfitting the training data.
At test time you may present your model with unlabeled data and make predictions for these samples. | 1 | 1 | 1 | During the process of making an lmdb file, we are supposed to make a train.txt and a val.txt file. I have already made a train.txt file which consists of the image name, a space, and its corresponding label, e.g. image1.JPG 0.
Now that I have to make the val.txt file, I'm confused as to how to give it its corresponding values, since it is my test data and I am hoping to predict those. Can anyone tell me what this val.txt file is and what it is supposed to be doing? | Caffe LMDB train and val.txt | 1.2 | 0 | 0 | 427
45,450,706 | 2017-08-02T02:57:00.000 | 4 | 0 | 0 | 0 | python,opencv,ubuntu-14.04 | 45,450,859 | 2 | false | 0 | 0 | libopencv is the debian/ubuntu package, while python-opencv is the python wrapper and can be accessed using the cv2 interface, like COLDSPEED mentioned | 1 | 9 | 1 | I'm new to opencv and using ubuntu 14.04. I'm confused about the difference between opencv, python-opencv, and libopencv, as I have libopencv and python-opencv installed on my system, but there is no cv interface accessible, so I have to install opencv, which is much harder than python-opencv and libopencv. | What's the difference with opencv, python-opencv, and libopencv? | 0.379949 | 0 | 0 | 16,239
45,453,243 | 2017-08-02T06:38:00.000 | 1 | 0 | 1 | 1 | python,linux,macos,root,file-permissions | 45,453,549 | 1 | false | 0 | 0 | Try chmod 777 filename.py; this will give the file all rights for execution and editing. There are also other modes for chmod, like 755, which will also work for your case. | 1 | 0 | 0 | My Python program uses terminal (system) commands to perform tasks on files and scripts. I am going to convert this Python program into a Mac OS application and a Linux application using pyinstaller. I am going to pass the application installer file to my friends. However, I have the following questions.
If a script or file which my program is trying to access doesn't have the proper permissions, will Python get an error?
Running some scripts or opening some files will require root permission. So is there an option that will prompt the user for the root (admin) password, or run my application with root privileges?
Thanks | Python application permission denied | 0.197375 | 0 | 0 | 1,860 |
45,455,892 | 2017-08-02T08:47:00.000 | 0 | 0 | 0 | 0 | python,windows,xlwings | 45,456,886 | 1 | false | 0 | 0 | The add-in replaces the need for the settings in VBA in newer versions.
One can debug the xlam module using "xlwings" as a password.
This enabled me to realize that the OPTIMIZED_CONNECTION parameter is now set through the "USE UDF SERVER" keyword in the xlwings.conf sheet (which does work) | 1 | 0 | 0 | I would like to use xlwings with the OPTIMIZED_CONNECTION set to TRUE. I would like to modify the setting but somehow cannot find where to do it. I changed the _xlwings.conf sheet name in my workbook but this seems to have no effect. Also I cannot find these settings in VBA as I think I am supposed to, under what is called "Functions settings in VBA module" in the xlwings documentation. I tried to re-import the VBA module but cannot find xlwings.bas on my computer (only xlwings.xlam, which I cannot access in VBA).
I am using the 0.11.4 version of xlwings.
Sorry for this boring question and thanks in advance for any help. | xlwings VBA function settings edit | 0 | 1 | 0 | 702 |
45,462,207 | 2017-08-02T13:26:00.000 | 0 | 0 | 1 | 0 | python,numpy,neural-network | 45,462,707 | 1 | true | 0 | 0 | Neural network weights are just data. You can store this any way you like along with the distributed application. As you have used Numpy to create the weights and biases arrays, you can probably just use pickle - add a save_network function or similar (used in the training program only) and a load_network function. If your weights and biases are just a bunch of local variables, you will want to put them into a structure like a dict first. | 1 | 0 | 1 | I have developed a desktop application. It includes a neural network as a part of the application. Now I'm confused about what to do after training. Can I make an executable out of it in the usual way?
Please, someone explain what I should do, because I have no idea how to pass this milestone. I've tried searching neural network tutorials, but none of them helped me with this problem.
If someone wants to know, I have used numpy and openCV only. | How to deploy a desktop application that includes a neural network? | 1.2 | 0 | 0 | 248 |
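A minimal sketch of the pickle route the answer suggests (the dict layout and file name are placeholders):

import pickle

def save_network(path, params):          # params: e.g. {'W1': ..., 'b1': ...}
    with open(path, 'wb') as f:
        pickle.dump(params, f)

def load_network(path):
    with open(path, 'rb') as f:
        return pickle.load(f)

# training program: save_network('net.pkl', {'W1': W1, 'b1': b1})
# deployed app:     params = load_network('net.pkl')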
45,463,581 | 2017-08-02T14:25:00.000 | 1 | 0 | 1 | 0 | python,windows-7,virtualenv | 45,463,629 | 2 | true | 0 | 0 | You can do virtualenv -p "path to python executable(whichever you want)" | 2 | 1 | 0 | I have a Python installation in C:\ProgramData\Anaconda2 and C:\ProgramData\Anaconda3. I would like to create a virtual environment using base Python (not anaconda) in C:\ProgramData. My question is two fold.
Can I use a python instance as the base for the new env that has not been installed? I.e. A clean version of base Python without Anaconda? Or, do I have to download and install that first in a third directory and then use that?
Can I specify which instance of python to use as the base when setting up the env? I.e from directory C:\ProgramData\ >> $ virtualenv my_project --C:\ProgramData\Python27? So in this example the new virtual environment would be created in C:\ProgramData\My_Project and use the clean base version of python instead of the Anaconda 2 or 3 distribution?
Thank you in advance. | Virtual ENV Specify Python Instance Used | 1.2 | 0 | 0 | 49 |
45,463,581 | 2017-08-02T14:25:00.000 | 1 | 0 | 1 | 0 | python,windows-7,virtualenv | 45,463,659 | 2 | false | 0 | 0 | Can I specify which instance of python to use as the base when setting up the env?
Of course, just run virtualenv -p P:\ath\to\python.exe
As for your other question -- the python install which you want to use has to exist locally, afaik. So you'd have to install python first, if you don't want to use the version provided by anaconda. | 2 | 1 | 0 | I have a Python installation in C:\ProgramData\Anaconda2 and C:\ProgramData\Anaconda3. I would like to create a virtual environment using base Python (not anaconda) in C:\ProgramData. My question is two fold.
Can I use a python instance as the base for the new env that has not been installed? I.e. A clean version of base Python without Anaconda? Or, do I have to download and install that first in a third directory and then use that?
Can I specify which instance of python to use as the base when setting up the env? I.e from directory C:\ProgramData\ >> $ virtualenv my_project --C:\ProgramData\Python27? So in this example the new virtual environment would be created in C:\ProgramData\My_Project and use the clean base version of python instead of the Anaconda 2 or 3 distribution?
Thank you in advance. | Virtual ENV Specify Python Instance Used | 0.099668 | 0 | 0 | 49 |
45,464,559 | 2017-08-02T15:07:00.000 | -2 | 0 | 0 | 0 | python,rest,alpha-vantage | 45,515,124 | 1 | true | 0 | 0 | I soon found out that Alpha Vantage doesn't support this, so I created a scraper instead to use on another website to get the information with one request, it isn't really fast, but that doesn't really matter that much right now since it will be rendered with ajax on a frontend framework, but later it should be optimized. | 1 | 0 | 0 | I'm currently working on a stockdashboard for myself using the alpha vantage REST API. My problem is that I want to get many stockprices from a list of tickers that I have, without using many requests to get all the prices from all the stocks. And also limiting the information I get from each stock to just being the stockprice for each stock. How would I query the alpha vantage api to not overload their servers with requests? | Alpha vantage Multiple stockprices, few requests | 1.2 | 0 | 1 | 1,394 |
45,464,760 | 2017-08-02T15:16:00.000 | 0 | 0 | 1 | 0 | python,sparse-matrix,networkx | 45,505,004 | 1 | false | 0 | 0 | NetworkX uses a sparse representation; read_edgelist reads the file line by line (i.e., it does not load the whole file at once).
So if NetworkX uses too much memory, it means this is actually what it takes to represent the whole graph in memory.
A possible solution is to read the file yourself and discard as many edges as possible before feeding it to NetworkX. | 1 | 0 | 0 | I use the read_edgelist function of networkx to read a graph's edges from a file (500 MB), G(nodes = 2.3M, edges = 33M); it uses the whole memory of the machine and seems to do nothing after not finding more memory to load the whole graph.
Is there any way to handle this problem like sparse graph solution or using other libraries? | Is there any way to handle memory usage in read_edgelist of networkx in python? | 0 | 0 | 1 | 128 |
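A sketch of the manual-reading idea, with a hypothetical filter (here: skipping self-loops) applied before each edge reaches NetworkX:

import networkx as nx

G = nx.Graph()
with open('edges.txt') as f:        # placeholder file name
    for line in f:
        u, v = line.split()[:2]
        if u != v:                  # hypothetical filter: drop self-loops
            G.add_edge(u, v)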
45,467,330 | 2017-08-02T17:24:00.000 | 1 | 1 | 0 | 0 | python,debian,packages,flasgger | 45,478,342 | 1 | true | 0 | 0 | If swagger is a dependency of the software you package and it is not available in Debian, you'll need to package it first.
If swagger is a dependency for the test suite only, you may consider modifying or disabling test.py by creating a patch in d/patches, for example. | 1 | 0 | 0 | Following the guidelines to build a Debian package from Python files powered by Flasgger.
When running a build, I get this error:
ImportError: No module named swagger_spec_validator.util
Which means that test.py doesn't see swagger_spec_validator.
There seem to be no Swagger-related packages for Debian at all. Should the swagger_spec_validator be included somewhere in the debian/control file? | Python into Debian package: No module named error | 1.2 | 0 | 0 | 164
45,468,503 | 2017-08-02T18:33:00.000 | 1 | 0 | 0 | 0 | python,macos,python-2.7,tkinter,activetcl | 45,489,621 | 3 | false | 0 | 1 | I don't know what is meant by the "new distribution" of ActiveTcl, but if you're using 8.6, it needs to be downgraded to 8.5.
Also, if you run IDLE which uses Tkinter, do you see any messages warning of "instability"? If you see that then, it means you need to downgrade Tcl to 8.5. | 1 | 2 | 0 | I've encountered a problem while attempting to create a Tkinter window using root = tk.Tk() . Every time I get to that point, the program crashes, and "Python quit unexpectedly" message is displayed.
I get no tracebacks at all, so I assume that is the ActiveTcl bug. However, I have the new distribution from ActiveTcl Website installed, which is supposed to take care of the problem (obviously, it doesn't).
Interestingly enough, it only crashes when it is executed in Python 2.7. It works fine in Python 3.6. However, I need to use 2.7.
My MacOS version is 10.12.5.
Any ideas / suggestions about fixing the issue are welcome.
P.S. I have read a good dozen of similar posts before posting this, and not any of the proposed solutions worked for me. Please consider this before marking this post as a duplicate. | Tkinter keeps crashing on Tk() on Mac | 0.066568 | 0 | 0 | 1,652 |
45,469,494 | 2017-08-02T19:33:00.000 | 0 | 0 | 1 | 0 | python,anaconda | 45,469,576 | 1 | false | 0 | 0 | try lowercasing quandl (import quandl), if that doesn't work try reinstalling quandl as a lowercase and then doing import quandl | 1 | 0 | 0 | I'm running python 3.6 in Anaconda. I've run pip install quandl in command prompt and now i'm trying to import Quandl, it gives me this error:
import Quandl
Traceback (most recent call last):
File "", line 1, in
import Quandl
ModuleNotFoundError: No module named 'Quandl'
I've checked the Anaconda program files and the quandl file is there. Any idea why it can't see the quandl file? | Can't import Quandl | 0 | 0 | 0 | 677
45,473,744 | 2017-08-03T02:18:00.000 | 3 | 1 | 0 | 1 | python,linux | 45,473,958 | 2 | true | 0 | 0 | When you run apt purge <package> or apt remove <package> you are not only instructing apt to remove the named package, but any other package that depends on it. Of course apt doesn't perform that unexpected operation without first asking for your consent, so I imagine it showed the list of packages that it was going to remove, and when you pressed Y it removed all of them.
So, to undo the mess: if you still have the window where you ran the purge, then check which packages it told you it was going to remove, and manually apt install them. If you don't have the list around, then you need to manually install every package that is not working properly.
If it is the window manager that got damaged, try apt-get install ubuntu-gnome-desktop or the appropriate package for your distribution/window manager.
Rule of thumb when deleting/updating packages: always read the list of packages affected; sometimes there is unexpected stuff. | 1 | 0 | 0 | Newbie to Linux here. I thought that apt-get purge is usually used to remove a pkg totally, but today it nearly crashed my whole system. I wanted to remove a previously installed python 3.4 distribution, but I wasn't sure which pkg it belongs to, so I used find /usr -type f -name "python3.4" to find it. The command returns several lines; the first one is /usr/bin/python3.4, so I typed dpkg -S /usr/bin/python3.4 to determine which pkg python3.4 belongs to. It returns python-minimal, so I typed sudo apt-get purge python-minimal, but then a lot of pkgs were removed, and also some installed. I'm totally confused, and I saw even the app store disappeared; a lot of the system was removed... Can someone help me? | What did apt-get purge do? | 1.2 | 0 | 0 | 3,227
45,475,587 | 2017-08-03T05:31:00.000 | 0 | 0 | 1 | 0 | python,binary,complement | 45,475,668 | 1 | false | 0 | 0 | You can use the ~ operator. If A = 00100, ~A = 11011.
If A is a string version of a decimal, convert it into an int first. | 1 | 0 | 0 | Hi all.
I want to change 1 to 0 and 0 to 1 in binary.
for example,
if binary is 00000110.
I want to change it to 11111001.
how to do that in python3?
Best regards. | how to change 1 to 0 and 0 to 1 in binary(Python) | 0 | 0 | 0 | 865 |
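For a fixed-width flip in Python, two common idioms; note that ~ on a Python int yields the (negative) two's-complement value, so you mask to the width you want:

n = 0b00000110
print('{:08b}'.format(~n & 0xFF))   # 11111001 (8-bit mask)
print('{:08b}'.format(n ^ 0xFF))    # same result via XOR

s = '00000110'                      # string version
print(''.join('1' if c == '0' else '0' for c in s))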
45,477,805 | 2017-08-03T07:39:00.000 | 0 | 0 | 0 | 0 | android,python,tkinter,termux | 54,987,146 | 3 | false | 0 | 1 | You can't install tkinter or any graphical library or framework on termux because termux doesn't have a GUI and relevant graphical headers. | 1 | 0 | 0 | I installed termux on my android device (Pixel C), and successfully installed python 3.6.2 there, and after downloaded (with pip) some libraries like pillow (there were some problems, but with online forums I solved it), vk, etc.
Tkinter should come preinstalled with Python, but it wasn't (like some other modules such as time, random, etc.).
All these modules, like tkinter, that should be preinstalled are not there, and it is not possible to install them.
pip install tkinter
->Could not find a version that satisfies the requirement time (from versions: )
No matching distribution found for tkinter.
if I try with:
apt-get install python3-tk
Still nothing - error placing file.
apt-get update and apt upgrade didn't help... | I can't install tkinter on termux | 0 | 0 | 0 | 8,845 |
45,478,335 | 2017-08-03T08:05:00.000 | 0 | 0 | 0 | 0 | apache-kafka,message-queue,offset,kafka-python,sequential-workflow | 45,479,067 | 2 | false | 0 | 0 | In the Java Kafka client, there are some methods on the Kafka consumer which can be used to specify the next consume position:
public void seek(TopicPartition partition, long offset)
Overrides the fetch offsets that the consumer will use on the next poll(timeout). If this API is invoked for the same partition more than once, the latest offset will be used on the next poll(). Note that you may lose data if this API is arbitrarily used in the middle of consumption to reset the fetch offsets.
This is enough, and there are also seekToBeginning and seekToEnd. | 1 | 0 | 0 | I know about configuring kafka to read from earliest or latest message.
How do we include an additional option in case I need to read from a previous offset?
The reason I need to do this is that the earlier messages which were read need to be processed again due to some mistake in the processing logic earlier. | How to configure kafka such that we have an option to read from the earliest, latest and also from any given offset? | 0 | 0 | 0 | 412 |
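In kafka-python (one of the tags here), the equivalent is roughly as follows (topic, partition and offset values are placeholders):

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
tp = TopicPartition('my-topic', 0)
consumer.assign([tp])
consumer.seek(tp, 42)            # next poll starts at offset 42
# consumer.seek_to_beginning(tp) / consumer.seek_to_end(tp) also exist
for msg in consumer:
    print(msg.offset, msg.value)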
45,478,674 | 2017-08-03T08:21:00.000 | 2 | 1 | 1 | 0 | python,json,unicode | 45,478,864 | 1 | false | 0 | 0 | This is not a problem. It is the correct encoding for non-ASCII text in JSON. | 1 | 0 | 0 | If I pass text through Python, Cyrillic text turns into \u****
There are a lot of ways to handle it inside py-scripts, but I also use Python as a handy JSON formatter (json.tool).
I tried CMD.exe, powershell and MINGW bash | python json.tool and unicode escape | 0.379949 | 0 | 0 | 1,082 |
45,478,974 | 2017-08-03T08:35:00.000 | 1 | 0 | 0 | 0 | python-3.x,komodo,komodoedit | 45,487,880 | 1 | false | 1 | 0 | You just need to update the file association under Preferences > File Associations. | 1 | 0 | 0 | I just recently started using Komodo Edit 10 and used an existing Django project which uses Python 3.6.x. I created a Komodo project (.komodoproject file) for it and updated the Projects > Project Preferences > Languages > Python 3 > Use this interpreter to point to my conda virtual environment. I've also added the site-packages directory to the "Additional Python 3 Import Directories" and so I was expecting code completion to work.
Now, when I open a .py file, like models.py, and start typing from dj, no code completion is done (I was expecting to get django drop-down). Then I noticed that in the open file, there's a drop-down to change the file type of the file (upper-right corner of editor). I changed it to "Python 3" and now completion works (yey!). So then I proceed to open the views.py file expecting code completion to work but it wouldn't, and I had to set the file type to "Python 3" before it worked.
Now, my question is if there's a way to batch change the file type for all .py files inside the project from "Python 2" to "Python 3"? Or do I have to tediously change the file type for each .py file manually? | How to set all .py files in Komodo Edit project to be recognized as Python 3 instead of Python 2? | 0.197375 | 0 | 0 | 197 |
45,479,246 | 2017-08-03T08:47:00.000 | 1 | 0 | 0 | 0 | python,ssl,connection | 53,810,560 | 1 | true | 0 | 0 | I have pretty much an idea of what you are trying to do with your code.
The error you are getting is neither from the Python module nor an OS error: it says that the website's certificate could not be verified against the CA certificates that the requests module uses to request and fetch data.
The requests module fails to make a successful handshake. It is possible that you are not allowed to access that website, or that the website has its own custom security and blocks bots.
Fortunately, you can create connection pools (proxies) or use HTTP adapters. The requests module can also be configured to verify certificates, so you can try a bunch of things to make it work. | 1 | 1 | 0 | I am using a package in Python; when I try to access its methods with a simple print, I come up with this error:
ConnectionError: HTTPSConnectionPool(host='xxx.xxxxx.xxx', port=443): Max retries exceeded with url: /?gfe_rd=cr&ei=DeCCWZWAKajv8werhIGAAw (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)'),))
Is it a Python error or an OS-level error?
Please help me with it.
thanks in advance | ConnectionError: HTTPSConnectionPool SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)'),)) | 1.2 | 0 | 1 | 3,035 |
45,479,510 | 2017-08-03T08:59:00.000 | 1 | 0 | 1 | 1 | python,ocaml,integration | 45,495,926 | 3 | false | 0 | 0 | As a complementary remark, it is perfectly possible to run a separated toplevel process and send to it input phrases and read the corresponding output. The trick to detect the end of a toplevel output is to add a guard phrase after every input phrase: rather than sending just f ();; to the toplevel process, one can send f ();; "end_of_input";; and then watch for the toplevel output corresponding to "end_of_input";; (aka - : string = "end_of_input"). My experience is that errors and warnings are generally quite easy to detect or parse from the toplevel output; so the only missing point is the formatting of the code. | 1 | 3 | 0 | I would like a python GUI to have an OCaml process in the background. I would like to keep a single session throughout the program's lifetime, and depending on user inputs, call some OCaml commands and retrieve OCaml's output. Some OCaml variables and structures may be defined along the way so I would like to maintain a single ongoing session.
My solution was to hold an OCaml toplevel process using popen and interact with its stdin and stdout. This works poorly for me, for several reasons:
1. I don't know when the OCaml calculation is done and can't tell if its output is complete or there is more to come (especially so if the evaluation takes some time, and if multiple OCaml commands were invoked).
2. I have no inherent way of telling whether the OCaml command ran smoothly or maybe there were OCaml warnings or errors.
3. I lose the structure of OCaml's output. For example, if the output spreads over several lines, I can't tell which lines were broken due to line size, and which were originally separate lines.
I know there are some discussions and some packages for combining python with OCaml, but they all run python commands from OCaml, and I need the opposite. | Integrating OCaml in python - How to hold an ocaml session from python? | 0.066568 | 0 | 0 | 345 |
45,482,450 | 2017-08-03T11:07:00.000 | 1 | 0 | 1 | 1 | python-3.x,docker,jupyter-notebook | 51,353,255 | 1 | false | 0 | 0 | This error is due to compile errors in the respective notebook.
To try it out, I commented everything out and just annotated a single cell with a GET request.
It worked! | 1 | 1 | 0 | When I try to execute Jupyter Kernel Gateway on Docker, I get the below error:
2017-08-03T11:00:51.732015249Z [KernelGatewayApp] Kernel shutdown: 27351426-2078-4101-b3f3-86da41d6e141
2017-08-03T11:00:51.735665285Z Traceback (most recent call last):
2017-08-03T11:00:51.735690921Z File "/opt/conda/bin/jupyter-kernelgateway", line 11, in <module>
2017-08-03T11:00:51.735699387Z sys.exit(launch_instance())
2017-08-03T11:00:51.735705691Z File "/opt/conda/lib/python3.4/site-packages/jupyter_core/application.py", line 267, in launch_instance
2017-08-03T11:00:51.735711902Z return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
2017-08-03T11:00:51.735717618Z File "/opt/conda/lib/python3.4/site-packages/traitlets/config/application.py", line 591, in launch_instance
2017-08-03T11:00:51.735723686Z app.initialize(argv)
2017-08-03T11:00:51.735731330Z File "/opt/conda/lib/python3.4/site-packages/kernel_gateway/gatewayapp.py", line 212, in initialize
2017-08-03T11:00:51.735737468Z self.init_configurables()
2017-08-03T11:00:51.735742836Z File "/opt/conda/lib/python3.4/site-packages/kernel_gateway/gatewayapp.py", line 241, in init_configurables
2017-08-03T11:00:51.735748923Z self.kernel_pool = KernelPool(self.prespawn_count, self.kernel_manager)
2017-08-03T11:00:51.735755996Z File "/opt/conda/lib/python3.4/site-packages/kernel_gateway/services/kernels/pool.py", line 27, in init
2017-08-03T11:00:51.735762895Z kernel_id = kernel_manager.start_kernel(kernel_name=self.kernel_manager.parent.seed_notebook['metadata']['kernelspec']['name'])
2017-08-03T11:00:51.735772782Z File "/opt/conda/lib/python3.4/site-packages/kernel_gateway/services/kernels/manager.py", line 71, in start_kernel
2017-08-03T11:00:51.735779471Z raise RuntimeError('Error seeding kernel memory')
2017-08-03T11:00:51.735785063Z RuntimeError: Error seeding kernel memory | RuntimeError in Jupyter Kernel Gateway on Docker | 0.197375 | 0 | 0 | 398 |
45,482,486 | 2017-08-03T11:09:00.000 | 2 | 0 | 0 | 0 | python,django,django-rest-framework | 45,482,515 | 2 | false | 1 | 0 | serializer.errors has the details of the validation errors that occurred | 1 | 5 | 0 | I'm using Django with Django-Rest-Framework. If I run the serializer.is_valid() function I get a False result. How can I show the reason for this result? | How to show mistakes with Django Serializer Validation? | 0.197375 | 0 | 0 | 312
45,483,128 | 2017-08-03T11:39:00.000 | 1 | 0 | 0 | 0 | android,python-3.x,sqlite,kivy | 45,489,681 | 1 | true | 0 | 1 | Just include the database file in the apk, as you would any other file. | 1 | 1 | 0 | So I am writing a Python 3 app with Kivy and I want to have some data stored in a database using SQLite.
The user needs to have access to that data from the first time he opens the app.
Is there a way to make it so that when I launch the app, the user that downloads it will already have the data I stored (i.e., distribute the database along with the app), so that I don't have to create it for every user?
I have searched here and there but haven't found an answer yet
Thank you in advance | Android-How can i attach SQLite database in python app | 1.2 | 1 | 0 | 310 |
45,484,521 | 2017-08-03T12:41:00.000 | 2 | 0 | 0 | 0 | python,django,django-views | 45,485,423 | 1 | false | 1 | 0 | Create your own decorator that will validate the logged-in user before going into the view and show the content accordingly. Put a check in the template that the logged-in user is requesting the page. You can use both @login_required and your own decorator on your view. | 1 | 1 | 0 | I'm currently developing a micro_blog to learn Django, and I haven't found a way to make a page visible to only one user.
e.g. I want each user to have a private page at /profile/username/private.
How should I make it so that only the user "username" has access to it?
For the moment, every user can access this page by typing the URL.
I already put @login_required at the head of my view function, but any logged-in user can still access the page.
Hope that you'll understand my issue,
Kind Regards,
[UPDATE]
I successfully made it by comparing the name of the currently connected user with the name of the owner of the private page. As each username is unique in the database, the way I did it works. | Django - Make private page by user | 0.379949 | 0 | 0 | 965
45,485,455 | 2017-08-03T13:21:00.000 | 0 | 0 | 0 | 0 | python,cx-freeze | 45,579,818 | 1 | false | 0 | 0 | How about trying to use IExpress? You can definitely put a licence with it.
If you run Windows it's definitely worth a go since it's installed by default.
Otherwise you could always build an exe, create a GUI with it showing all the options, and then launch the msi afterwards. For different options you could have a different msi. | 1 | 2 | 0 | I want my msi created by cx_freeze to select options from the given list with a user interface (as it is in a licence agreement).
How can I achieve it? | msi created by cx_freeze to select option from the given user interface(like licence agreement) | 0 | 0 | 0 | 54 |
45,487,397 | 2017-08-03T14:38:00.000 | -1 | 0 | 1 | 0 | python,pip,cherrypy | 45,493,774 | 1 | false | 0 | 0 | Ended up copying my entire lib\site-packages folder to the remote server, placed where it would have been on my old server, and it worked fine.
TL;DR: copy your %Python_home%/lib/site-packages folder to your remote machine and it might work. You need to have the same version of Python installed; in my case it was 2.7. | 1 | 0 | 0 | Good morning everyone. I am attempting to install CherryPy on a server without internet access. It has Windows Server 2012. I can RDP to it, which is how I have attempted to install it. The server has Python 2.7 installed.
What I have tried (unsuccessfully):
RDP to the server, pip install cherrypy from command line (issue is that it is offline)
Downloaded the .grz files, RDP to server, from CL ran python (source to the setup.py file) install. says that there are dependencies that are unable to be downloaded (because offline).
Downloaded the whl file, attempted to run, did not work.
Is there a way to download the package, along with all dependencies, on a remote computer (with internet access) and then copy the files over and install? I have attempted to find this information without success.
thank you all for your help. | Unable to install cherrypy on an offline server | -0.197375 | 0 | 0 | 259 |
45,488,558 | 2017-08-03T15:26:00.000 | 4 | 0 | 0 | 0 | python,scikit-learn | 48,011,076 | 2 | false | 0 | 0 | Very simple actually: model.loss_curve_ gives you the values of the loss for each epoch. You can then easily plot the learning curve by putting the epochs on the x axis and the values mentioned above on the y axis | 1 | 5 | 1 | I am using an MLPRegressor to solve a problem and would like to plot the loss function, i.e., by how much the loss decreases in each training epoch. However, the attribute model.loss_ available for the MLPRegressor only allows access to the last loss value. Is there any possibility to access the whole loss history? | Loss history for MLPRegressor | 0.379949 | 0 | 0 | 7,290 |
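A small plotting sketch for this attribute (matplotlib assumed; the toy data is a placeholder):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor

X = np.random.rand(200, 3)          # placeholder data
y = X.sum(axis=1)
model = MLPRegressor(max_iter=500).fit(X, y)
plt.plot(model.loss_curve_)         # one loss value per training epoch
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()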
45,489,990 | 2017-08-03T16:34:00.000 | -1 | 0 | 1 | 0 | python,pycharm,keyboard-shortcuts | 45,490,274 | 2 | false | 0 | 0 | While you run your code in debugging mode, use the inline debugging functionality. This lets you view the values of variables used in your source code right next to their usage, without having to switch to the Variables pane of the Debug tool window.
Enabling inline debugging
To enable it, in the Debug tool window toolbar click the Settings icon and select the Show Values Inline option from the popup menu.
Then go to the Data Views page in the Settings/Preferences dialog and select the Show values inline check box. | 1 | 1 | 0 | I've declared a variable as a constant at the beginning of the module, and some hundreds of lines below I want to see its value.
I know that it's possible to use ctrl and LMB to jump directly to declaration, but it's so distracting!
When I move the mouse over a variable's occurrence with the Ctrl button pressed, I get only the name and inferred type. I believe there is some way to see the value too. | How to see value of the constant hovering on it somewhere in the code? [Pycharm IDE] | -0.099668 | 0 | 0 | 1,005
45,498,188 | 2017-08-04T04:16:00.000 | 1 | 0 | 0 | 0 | python,sqlite,ipython | 45,498,306 | 2 | true | 0 | 0 | That is because .fetchall() leaves your cursor (c) pointing past the last row.
If you want to query your DB again, you should call .execute again.
Or, if you just want to use your fetched data again, you can store the result of c.fetchall() in a variable. | 1 | 0 | 0 | So I was trying to learn sqlite and how to use it from an IPython notebook, and I have a sqlite object named db.
I am executing this command:
sel=" " " SELECT * FROM candidates;" " "
c=db.cursor().execute(sel)
and when I do this in the next cell:
c.fetchall()
it does print out all the rows but when I run this same command again i.e. I run
c.fetchall() again it doesn't print out anything, it just displays a two square brackets with nothing inside them. But when I run the above first command ie, c=db.cursor().execute(sel) and then run db.fetchall() it again prints out the table.
This is very weird and I don't understand it, what does this mean? | Weird behavior by db.cursor.execute() | 1.2 | 1 | 0 | 189 |
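A sketch of the store-the-result option, in the same spirit as the cells above:

c = db.cursor().execute(sel)
rows = c.fetchall()   # materialize once; the cursor is now exhausted
print(rows)           # reuse `rows` in later cells as many times as you like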
45,500,972 | 2017-08-04T07:43:00.000 | 0 | 0 | 0 | 0 | python,mysql,sql,django | 45,501,557 | 3 | false | 1 | 0 | @Daniel Roseman helped me understand the answer.
SOLVED:
What I was getting from the query was a model instance of Character, so I couldn't have accessed it through result.Character but through result.Field_Inside_Of_Character | 1 | 0 | 0 | I am using django 1.10 and python 3.6.1
when executing
get_or_none(models.Character, pk=0), with SQL's get method, the query returns a hashmap i.e.: <Character: example>
How can I extract the value example?
I tried .values(), I tried iterating, I tried .Character
nothing seems to work, and I can't find a solution in the documentation.
Thank you, | Django SQL get query returns a hashmap, how to access the value? | 0 | 1 | 0 | 414 |
45,504,829 | 2017-08-04T10:45:00.000 | 0 | 0 | 0 | 0 | python,xml,django,jinja2 | 45,504,945 | 1 | false | 1 | 0 | This isn't anything to do with Python or Jinja2, but just down to how browsers render text within HTML.
If you want to preserve spacing and indentation, you need to wrap your content with <pre>...</pre> tags. | 1 | 0 | 0 | In my Django application, I call an external API, which returns XML. I would like to display this "minified" response as an indented multiline string on the page (a plus would be syntax highlighting). I tried to process the string in Python with toprettyxml() from xml.dom.minidom, and a few things with ElementTree, but it does not play along the Jinja2 rendering well (line breaks disappear and I only get a one line string, displayed inside <pre> tags).
What's the recommended way to display such code excerpt?
Should I use client-side rendering? Then, which library should I use?
Django version: 1.11.2
Python 3.6.1 | How to display indented xml inside a django webpage? | 0 | 0 | 1 | 425 |
45,507,805 | 2017-08-04T13:13:00.000 | 1 | 1 | 1 | 0 | python | 45,507,877 | 4 | false | 0 | 0 | I believe there are many ways around this, but here is what I would do:
Create a JSON config file with all the paths I need defined.
For even more portability, I'd have a default path where I look for this config file but also have a command line input to change it. | 1 | 3 | 0 | Working with scientific data, specifically climate data, I am constantly hard-coding paths to data directories in my Python code. Even if I were to write the most extensible code in the world, the hard-coded file paths prevent it from ever being truly portable. I also feel like having information about the file system of your machine coded in your programs could be security issue.
What solutions are out there for handling the configuration of paths in Python to avoid having to code them out explicitly? | Methods to avoid hard-coding file paths in Python | 0.049958 | 0 | 0 | 6,241 |
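A sketch of that combination, with hypothetical key names, loading a JSON config and allowing a command-line override of its location:

import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument('--config', default='config.json')  # default lookup path
args = parser.parse_args()

with open(args.config) as f:
    cfg = json.load(f)

data_dir = cfg['climate_data_dir']  # hypothetical key; no hard-coded path in code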
45,508,137 | 2017-08-04T13:28:00.000 | 0 | 0 | 0 | 1 | python,automation,fabric,devops | 45,508,534 | 1 | false | 0 | 0 | You can try to modify your command as follows:
mysql -uroot -p{your_password} -e 'SELECT * FROM dfs_va2.artikel_trigger;' > /Users/admin/Documents/dbdump/$(hostname)_dump.csv" download:"/Users/johnc/Documents/Imports/$(hostname)_dump.csv"
hostname returns the current machine name, so all your files should be unique (of course, if machines have unique names).
Also you don't need to navigate to /bin/mysql every time, you can use simply mysql or absolute path /usr/local/mysql/bin/mysql | 1 | 0 | 0 | I have to SSH into 120 machines and make a dump of a table in databases and export this back on to my local machine every day, (same database structure for all 120 databases).
There isn't a field in the database that I can extract the name from to be able to identify which one it comes from, it's vital that it can be identified, as it's for data analysis.
I'm using the Python tool Fabric to automate the process and export the CSV on to my machine..
fab -u PAI -H 10.0.0.35,10.0.0.XX,10.0.0.0.XX,10.0.0.XX -z 1
cmdrun:"cd /usr/local/mysql/bin && ./mysql -u root -p -e 'SELECT *
FROM dfs_va2.artikel_trigger;' >
/Users/admin/Documents/dbdump/dump.csv"
download:"/Users/johnc/Documents/Imports/dump.csv"
Above is what I've got working so far but clearly, they'll all be named "dump.csv" is there any awesome people out there can give me a good idea on how to approach this? | Best way to automate file names of multiple databases | 0 | 1 | 0 | 62 |
45,510,704 | 2017-08-04T15:36:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,tkinter | 45,510,841 | 1 | true | 0 | 1 | If you rename the file to *filename*.pyw then the program will launch with no console. | 1 | 1 | 0 | I am working on a GUI application with tkinter. When I click the [X] button in the corner, the GUI application is closed, but the console application is still running.
How do I make the console application close at the same time the GUI application is closed?
I tried sys.exit() but that did not work. | tkinter gui is closed but console is not closed | 1.2 | 0 | 0 | 112
45,510,970 | 2017-08-04T15:51:00.000 | 0 | 0 | 0 | 0 | python,django,rest,api | 45,511,102 | 2 | false | 1 | 0 | Your best bet is to
a.) Perform the scraping in views themselves, and pass the info in a context dict to the template
or
b.) Write to a file and have your view pull info from the file. | 1 | 0 | 0 | Basically I have a program which scrapes some data from a website; I need to either print it out to a Django template or to a REST API without using a database. How do I do this without a database? | Transfering data to REST API without a database django | 0 | 0 | 0 | 264