Column schema (dtype, then the min to max of the values, or of the string lengths for string columns):
Q_Id: int64, 337 to 49.3M
CreationDate: string lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string lengths 15 to 29k
Title: string lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
48,542,418
2018-01-31T12:37:00.000
0
0
0
0
python,import,packages
49,105,522
1
true
0
0
Just change this instruction: conda create -n tensorflow pip python=3.5 to: conda create -n tensorflow pip python=3.5 -spyder.
1
0
1
I have been working with Anaconda, Python 3.5 and Windows 7. Having been obliged to work with TensorFlow, I installed and ran it. The first time it worked, but the second time I got the error: "tensor flow module not found". I uninstalled and installed it again and got the same error. Switching to Windows 10 or Python 2, or using another system, has not helped. Is there any other way to try (except using Linux)? Thanks a lot in advance.
Unsuccessfully Running TensorFlow Package on Windows and Python
1.2
0
0
39
48,542,904
2018-01-31T13:04:00.000
0
1
0
0
python,python-3.x,selenium
48,543,491
1
false
1
0
After reading through the Selenium code and playing in the interpreter, it appears there is no way to retrieve the current implicit_wait value. This is a great opportunity to add a wrapper to your framework. The wrapper should be used any time a user wants to change the implicit wait value. The wrapper would store the current value and provide a 'getter' to retrieve the current value. Otherwise, you can submit a request to the Selenium development team...
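A minimal sketch of such a wrapper, assuming a plain Selenium WebDriver instance; the class and method names here are illustrative, not part of Selenium's API:

    # Wrapper that remembers the implicit wait it last applied, since Selenium
    # itself offers no getter for the current value.
    from selenium import webdriver

    class WaitTrackingDriver:
        def __init__(self, driver, default_wait=20):
            self.driver = driver
            self.implicit_wait = default_wait
            self.driver.implicitly_wait(default_wait)

        def set_implicit_wait(self, seconds):
            self.implicit_wait = seconds
            self.driver.implicitly_wait(seconds)

        def get_implicit_wait(self):
            return self.implicit_wait

    # usage: lower the wait inside a method, then restore the previous value
    wrapped = WaitTrackingDriver(webdriver.Firefox(), default_wait=20)
    previous = wrapped.get_implicit_wait()
    wrapped.set_implicit_wait(2)
    # ... fast lookups here ...
    wrapped.set_implicit_wait(previous)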
1
0
0
I have automation scripts where the implicitly_wait is parametrized so that the user will be able to set it. I have a default value of 20 seconds which I am aware of but there is a chance that the user has set it with a different value. In one of my methods I would like to change the implicitly_wait (to lower it as much as possible) and return it to the value before the method was called. In order to do so I would like to save the implicitly_wait value before I change it. This is why I am looking for a way to reach to it.
How can I view the implicitly_wait that the webdriver was set with?
0
0
1
31
48,543,257
2018-01-31T13:23:00.000
0
0
0
0
python,statistics
48,544,648
1
true
0
0
So after some research, here is my answer. Bootstrapping resamples either a 1D array or 2D data (also referred to as the pairs bootstrap) over columns of data (columns only), while oversampling targets the minority class in the dataset (rows + columns). Bootstrapping is less popular for binary classification with 100+ features because it does not maintain the underlying relationship between the features, while oversampling (depending on the technique) does maintain that relationship. Moreover, the bootstrap works best for continuous data in a single column (whether 1D or 2D), since you can then produce a range of data for the whole column regardless of minority or majority class, as if the measurements were taken n times (n being any integer >= 1). Thus, in this question's scenario, oversampling is preferred because it maintains the underlying structure of the minority-class data.
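A small illustration of the two approaches side by side; it assumes the imbalanced-learn package is installed and uses toy data, so treat it as a sketch rather than a recipe:

    # Row bootstrap (resampling existing rows) vs. SMOTE oversampling
    # (synthesising new minority-class rows between existing neighbours).
    import numpy as np
    from sklearn.datasets import make_classification
    from imblearn.over_sampling import SMOTE

    X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)

    # bootstrap: sample whole rows with replacement, no new feature combinations
    idx = np.random.default_rng(0).integers(0, len(X), size=len(X))
    X_boot, y_boot = X[idx], y[idx]

    # oversampling: balance the classes by creating synthetic minority rows
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    print(np.bincount(y), np.bincount(y_res))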
1
0
1
Oversampled data as in those created by the SMOTE and ADASYN methods. Simulated data as in those created by statistics, using extreme mean values, bootstrapping and measuring performance with the p-value. The application is to increase the minority class of underrepresented data using Python.
What is the difference between oversampling minority data and simulating minority data?
1.2
0
0
39
48,545,255
2018-01-31T15:03:00.000
3
0
1
0
python,visual-studio-2015,static-libraries,libcmtd
49,984,841
1
true
0
0
This is what I needed to do to build and use python statically embedded in another application.

To build the static python library (e.g., python36_d.lib, python36.lib):

Convert ALL projects in the python solution (pcbuild.sln) to static. This is about 40 projects, so it may take awhile. This includes setting library products to be built as 'static lib', and setting all /MD and /MDd build options to /MT and /MTd.

For at least the pythoncore project, alter the preprocessor define to be Py_NO_ENABLE_SHARED. This tells the project it will be looking for calls from static libraries.

By hook or crook, find yourself a pyconfig.h file and put it in the Include area of your Python build. It is unclear how this file is built from Windows tools, but one seems to be able to snag one from other sources and it works ok. One could probably grab the pyconfig.h from the pre-compiled version of the code you are building. [By the way, the Python I built was 3.6.5 and was built with Visual Studio 2015, Update 3.]

Hopefully, this should enable you to build both python36.lib and python36_d.lib. Now you need to make changes to your application project(s) to enable them to link with the python library. You need to do this:

Add the Python Include directory to the General->Include Directories list.

Add the Python Library directories to the General->Library Directories list. These will be ..\PCBuild\win32 and ..\PCBuild\amd64.

Add the define Py_NO_ENABLE_SHARED to the C/C++ -> Preprocessor area.

For Linker->Input, add (for releases) python36.lib;shlwapi.lib;version.lib and (for debugs) python36_d.lib;shlwapi.lib;version.lib.

And that should be it. It should run and work. But one more thing: in order to function, the executable needs to access the Lib directory of the python build, so a copy of that needs to be moved to wherever the executable (containing the embedded python) resides. Or you can add the Lib area to the execution PATH for Windows; that should work as well. That's about all of it.
1
2
0
I am working with the 3.6.4 source release of Python. I have no trouble building it with Visual Studio as a dynamic library (/MDd) I can link the Python .dll to my own code and verify its operation. But when I build it (and my code) with (/MTd) it soon runs off the rails when I try to open a file with a Python program. A Debug assertion fails in read.cpp ("Expression: _osfile(fh) & FOPEN"). What I believe is happening is the Python .dll is linking with improper system libraries. What I can't figure out is how to get it to link with the correct ones (static libraries).
I cannot build python.dll as a static library (/MTd) using Visual Studio
1.2
0
0
1,688
48,545,999
2018-01-31T15:42:00.000
-1
0
1
0
python,buffer,metasploit
48,546,198
3
false
0
0
They use Metasploit (msfvenom, to be more specific) to create or generate shellcode, typically embedded in crafted or exploited files such as documents (doc, ppt, xls, etc.), with different encodings.
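As a side note on the syntax itself, each \xNN is just a hexadecimal escape for one byte, so the buffer from the question decodes to ordinary text; a quick check:

    # The string from the question, decoded character by character.
    buf = "\x5b\x4d\x6f\x76\x69\x65\x50\x6c\x61\x79\x5d\x0d\x0a\x46\x69\x6c\x65\x4e\x61\x6d\x65\x30\x3d\x43\x3a\x5c"
    print(repr(buf))                        # '[MoviePlay]\r\nFileName0=C:\\'
    print([hex(ord(c)) for c in buf[:4]])   # ['0x5b', '0x4d', '0x6f', '0x76']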
1
1
0
this is my first post on here so please excuse me if I have made any mistakes. So, I was browsing around on the Metasploit page, and I found these strange types of codes. I tried searching it on google and on here, but couldn't find any other questions and answers like I had. I also noticed that Elliot used the method in "Mr. Robot" while programming in Python. I can see that the code is usually used in viruses, but I need to know why. This is the code that I found using this method: buf += "\x5b\x4d\x6f\x76\x69\x65\x50\x6c\x61\x79\x5d\x0d\x0a\x46\x69\x6c\x65\x4e\x61\x6d\x65\x30\x3d\x43\x3a\x5c"
What does the "\x5b\x4d\x6f etc.." mean in Python?
-0.066568
0
0
1,314
48,547,573
2018-01-31T17:01:00.000
0
0
1
0
python,virtualenv,spyder
48,555,924
1
false
0
0
(Spyder maintainer here) You need to select a valid Python interpreter inside your virtualenv for that setting to take effect (e.g. /usr/bin/python3 on Linux). It seems you're not selecting a valid interpreter or you're merely selecting the directory of the virtualenv. That's why that setting is not taking effect.
1
0
0
I have tried using Tools-Preferences-Python Interpreter to point it to the local venv, but it gives me an error "you selected an invalid python interpreter". I am using spyder 3.1.4. The virtual environment is based on python3.
I can not connect spyder3 to a python virtual environment
0
0
0
354
48,549,453
2018-01-31T18:51:00.000
-1
1
0
0
python,r,twitter,tweepy,twython
48,580,106
2
false
0
0
The Twitter API provides historical data for a hashtag only for the past 10 days. There is no limit on the number of tweets, but they have put a limitation on time. There is no way to get historical data related to a hashtag older than 10 days unless:

You have access to their premium API (Twitter has recently launched its premium API, where you get access only if you meet their criteria; to date they have granted access to very few users).

You purchase the data from data providers like Gnip.

You have internal contacts at Twitter ;)
1
1
0
I am trying to model the spread of information on Twitter, so I need the number of tweets with specific hashtags and the time each tweet was posted. If possible I would also like to restrict the time period for which I am searching. So if I were examining tweets with the hashtag #ABC, I would like to know that there were 1,400 tweets from 01/01/2015 - 01/08/2015, and then the specific time for each tweet. I don't need the actual tweet itself, though. From what I've read so far, it looks like the Twitter API restricts the total number of tweets you can pull and limits how far back I can search. Anyone know if there's a way for me to get this data?
How can I get the number of tweets associated with a certain hashtag, and the timestamp of those tweets?
-0.099668
0
1
1,686
48,550,201
2018-01-31T19:41:00.000
15
0
0
0
python,tensorflow,machine-learning,keras,artificial-intelligence
48,550,825
3
true
0
0
Yes, train_on_batch trains using a single batch only and once. While fit trains many batches for many epochs. (Each batch causes an update in weights). The idea of using train_on_batch is probably to do more things yourself between each batch.
2
12
1
I saw a sample of code (too big to paste here) where the author used model.train_on_batch(in, out) instead of model.fit(in, out). The official documentation of Keras says: Single gradient update over one batch of samples. But I don't get it. Is it the same as fit(), but instead of doing many feed-forward and backprop steps, it does it once? Or am I wrong?
What does train_on_batch() do in keras model?
1.2
0
0
13,182
48,550,201
2018-01-31T19:41:00.000
0
0
0
0
python,tensorflow,machine-learning,keras,artificial-intelligence
62,452,563
3
false
0
0
The fit method of the model trains the model on the data you give it. However, because of memory limitations (especially GPU memory), we can't train on a large number of samples at once, so we need to divide the data into small pieces called mini-batches (or just batches). The fit method of Keras models does this data splitting for you and passes through all the data you gave it. However, sometimes we need a more complicated training procedure, for example when we want to randomly select new samples to put in the batch buffer each epoch (e.g. GAN training or Siamese CNN training). In these cases we don't use the fancy and simple fit method but instead use the train_on_batch method. To use this method we generate a batch of inputs and a batch of outputs (labels) in each iteration and pass them to it; it trains the model on all the samples in the batch at once and gives us the loss and other metrics calculated with respect to the batch samples.
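A minimal sketch of such a manual loop; the model, the batch generator, and the hyper-parameters below are placeholders, not taken from the question's code:

    # Each call to train_on_batch performs exactly one gradient update on the
    # batch it is given, unlike fit, which iterates over the whole dataset.
    import numpy as np
    from tensorflow import keras

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(10,))])
    model.compile(optimizer="sgd", loss="mse")

    def make_batch(batch_size=32):
        # placeholder generator; in practice select/augment your samples here
        return np.random.rand(batch_size, 10), np.random.rand(batch_size, 1)

    for step in range(100):
        x_batch, y_batch = make_batch()
        loss = model.train_on_batch(x_batch, y_batch)  # one update, returns the batch loss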
2
12
1
I saw a sample of code (too big to paste here) where the author used model.train_on_batch(in, out) instead of model.fit(in, out). The official documentation of Keras says: Single gradient update over one batch of samples. But I don't get it. Is it the same as fit(), but instead of doing many feed-forward and backprop steps, it does it once? Or am I wrong?
What does train_on_batch() do in keras model?
0
0
0
13,182
48,553,303
2018-01-31T23:47:00.000
2
0
0
0
python,python-3.x,pandas,csv
48,553,586
3
false
0
0
df.to_csv('output.tsv', sep='\t') will separate the values with tabs instead of commas. A .tsv is a tab-separated values file.
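A short sketch of that suggestion, assuming a column whose cells are Python lists:

    # With a tab as the field separator, the commas inside list-valued cells
    # no longer clash with the delimiter.
    import pandas as pd

    df = pd.DataFrame({"id": [1, 2], "pairs": [[1, 2], [3, 4]]})
    df.to_csv("output.tsv", sep="\t", index=False)
    print(open("output.tsv").read())   # the "[1, 2]" cells keep their commas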
1
3
1
I want to write a pandas dataframe to csv. One of the columns of the df has entries which are lists, e.g. [1, 2], [3, 4], ... When I use df.to_csv('output.csv') and I open the output csv file, the commas are gone. That is, the corresponding column of the csv has entries [1 2], [3 4], .... Is there a way to write a dataframe to csv without removing the commas in these lists? I've also tried using csv.writer.
Writing dataframe to csv WITHOUT removing commas
0.132549
0
0
7,416
48,553,336
2018-01-31T23:50:00.000
0
0
1
0
python,opencv,camera
48,553,481
1
false
0
0
Yes, 3 cameras should be sufficient to locate an object in 3D space. If we know the (X,Y,Z) coordinates of each camera and the angle to the object from each camera, we can construct 3 lines each going thru a camera and towards the object. All 3 will only cross in one place - which is at the object.
1
0
0
I am doing a project on person re-identification within a single room, using the concept of an appearance model and the person's actual location. My setup: I have 3 cameras inside a single room (overlapping). These cameras detect a person and calculate the appearance model for that person along with their exact location within the room in (x,y,z). My server in the cloud will maintain a feature list for each person in the room. I want to know whether three cameras shooting at one object from different directions will be able to locate that object at the exact same location or not. Given that I have each camera's location in (x,y,z) coordinates and the object's coordinates in the 2D image plane, my concern is: with all this information in hand, if the three cameras each calculate the person's actual position relative to their respective camera positions, will all cameras report the same location for that person? If not, could you please tell me what aspects I should consider to make all cameras report the same exact location for an object.
Camera orientation and Object location identificaiton in 3D Coordinate system
0
0
0
182
48,554,394
2018-02-01T02:17:00.000
0
0
1
0
python,set,combinations,counter
48,554,518
2
false
0
0
Based on the edit you could make an object for each value. It could hold the number of times you have used the element and the element itself. When you find you have used an element three times, remove it from the list
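A rough sketch of that bookkeeping; the three-uses rule and the list-of-sets layout come from the question, everything else (names, sample data) is illustrative:

    # Track how many uses remain for each value and which set indices hold it.
    pool = [{1, 2, 3}, {2, 3, 4}, {1, 4, 5}]      # one set per index
    remaining = {v: 3 for s in pool for v in s}    # each value usable 3 times
    locations = {}                                 # value -> indices holding it
    for i, s in enumerate(pool):
        for v in s:
            locations.setdefault(v, set()).add(i)

    def use(value):
        """Consume one use; once exhausted, remove the value from every set."""
        remaining[value] -= 1
        if remaining[value] == 0:
            for i in locations.pop(value):
                pool[i].discard(value)
            del remaining[value]

    rarest = min(remaining, key=remaining.get)     # rarest still-available value
    print(rarest, locations[rarest])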
1
0
0
I have the following situation: I am generating n combinations of size 3 from, made from n values. Each kth combination [0...n] is pulled from a pool of values, located in the kth index of a list of n sets. Each value can appear 3 times. So if I have 10 values, then I have a list of size 10. Each index holds a set of values 0-10. So, it seems to me that a good way to do this is to have something keeping count of all the available values from among all the sets. So, if a value is rare(lets say there is only 1 left), if I had a structure where I could look up the rarest value, and have the structure tell me which index it was located in, then it would make generating the possible combinations much easier. How could I do this? What about one structure to keep count of elements, and a dictionary to keep track of list indices that contain the value? edit: I guess I should put in that a specific problem I am looking to solve here, is how to update the set for every index of the list (or whatever other structures i end up using), so that when I use a value 3 times, it is made unavailable for every other combination. Thank you. Another edit It seems that this may be a little too abstract to be asking for solutions when it's hard to understand what I am even asking for. I will come back with some code soon, please check back in 1.5-2 hours if you are interested.
Keeping count of values available from among multiple sets
0
0
0
37
48,555,860
2018-02-01T05:22:00.000
0
0
0
0
python,apache-spark,pyspark,spark-dataframe
50,587,505
2
false
0
0
In order to see whether there is any impact, you can try two things: Write the data back to the file system, once with the original type and another time with your optimisation, and compare the size on disk. Or try calling collect on the dataframe and look at the driver memory in your OS's system monitor; make sure to induce a garbage collection to get a cleaner indication. Again, do this once without the optimisation and another time with it. user8371915 is right in the general case, but take into account that the optimisations may or may not kick in based on various parameters like row group size and the dictionary encoding threshold. This means that even if you do see an impact, there is a good chance you could get the same compression by tuning Spark.
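A minimal sketch of the on-disk comparison with a toy DataFrame; the column name and output paths are placeholders:

    # Cast the low-cardinality integer column to ByteType and write both
    # versions out so their sizes can be compared.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col
    from pyspark.sql.types import ByteType

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1,), (0,), (-1,)], ["flag"])

    df_small = df.withColumn("flag", col("flag").cast(ByteType()))

    df.write.mode("overwrite").parquet("/tmp/original")
    df_small.write.mode("overwrite").parquet("/tmp/bytetyped")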
2
5
1
I'm reading in a large amount of data via parquet files into dataframes. I noticed a vast amount of the columns either have 1,0,-1 as values and thus could be converted from Ints to Byte types to save memory. I wrote a function to do just that and return a new dataframe with the values casted as bytes, however when looking at the memory of the dataframe in the UI, I see it saved as just a transformation from the original dataframe and not as a new dataframe itself, thus taking the same amount of memory. I'm rather new to Spark and may not fully understand the internals, so how would I go about initially setting those columns to be of ByteType?
PySpark casting IntegerTypes to ByteType for optimization
0
0
0
339
48,555,860
2018-02-01T05:22:00.000
0
0
0
0
python,apache-spark,pyspark,spark-dataframe
48,645,854
2
false
0
0
TL;DR It might be useful, but in practice impact might be much smaller than you think. As you noticed: the memory of the dataframe in the UI, I see it saved as just a transformation from the original dataframe and not as a new dataframe itself, thus taking the same amount of memory. For storage, Spark uses in-memory columnar storage, which applies a number of optimizations, including compression. If data has low cardinality, then column can be easily compressed using run length encoding or dictionary encoding, and casting won't make any difference.
2
5
1
I'm reading in a large amount of data via parquet files into dataframes. I noticed a vast amount of the columns either have 1,0,-1 as values and thus could be converted from Ints to Byte types to save memory. I wrote a function to do just that and return a new dataframe with the values casted as bytes, however when looking at the memory of the dataframe in the UI, I see it saved as just a transformation from the original dataframe and not as a new dataframe itself, thus taking the same amount of memory. I'm rather new to Spark and may not fully understand the internals, so how would I go about initially setting those columns to be of ByteType?
PySpark casting IntegerTypes to ByteType for optimization
0
0
0
339
48,558,085
2018-02-01T08:13:00.000
4
0
0
0
python,machine-learning,keras,time-series
48,558,428
1
true
0
0
I think you are using a batch processing model (you aren't using any real-time processing frameworks or tools), so there shouldn't be any problem when building your model or classification. The problem may occur a while after you build the model, because after that time your trained model is no longer valid. I suggest some ways that may solve this problem:

Use real-time or near real-time processing (like Apache Spark, Flink, Storm, etc.).

Use some conditions to check your data periodically for changes; if any change happens, run your model again.

Delete instances that you think may cause the problem (that changed data may be an anomaly itself), but first make sure that data is not important.

Change your algorithm and use algorithms that are not very sensitive to changes.
1
2
1
I have some problem when detecting anomaly from time series data. I use LSTM model to predict value of next time as y_pred, true value at next time of data is y_real, so I have er = |y_pred - y_t|, I use er to compare with threshold = alpha * std and get anomaly data point. But sometime, our data is effected by admin or user for example number of player of a game on Sunday will higher than Monday. So, should I use another model to classify anomaly data point or use "If else" to classify it?
Real-time anomaly detection from time series data
1.2
0
0
1,811
48,558,151
2018-02-01T08:18:00.000
7
0
0
0
user-interface,automation,sl4a,pyautogui,qpython3
48,928,935
2
false
0
1
Something like PyAutoGui for Android is AutoInput. It's a part of the main app Tasker. If you've heard about Tasker, then you know what I'm talking about. If not, Tasker is an app built to automate your Android device. If you need specific taps and scrolls, you can install the AutoInput plug-in for Tasker; it lets you tap a precise point on your screen. Or, if you have a rooted device, you can run shell commands directly from Tasker without the need for any plug-in. Now, you can get the precise x,y coordinates easily: go to Settings, About Phone, and tap on Build Number until it says "You've unlocked developer options." Now, in the Settings app, you will have a new Developer Options entry. Open it and scroll down till you find "Show Pointer Location" and turn that on. Now wherever you tap and hold the screen, the top part of your screen will give you the x,y coordinate of that spot. I hope this helps. Please comment if you have any queries.
2
3
0
I'm currently using Qpython and Sl4A to run python scripts on my Droid device. Does anybody know of a way to use something like PyAutoGUI on mobile to automate tapping sequences (which would be mouse clicks on a desktop or laptop)? I feel like it wouldn't be too hard, but I'm not quite sure how to get the coordinates for positions on the mobile device.
Way to use something like PyAutoGUI on Mobile?
1
0
0
8,687
48,558,151
2018-02-01T08:18:00.000
2
0
0
0
user-interface,automation,sl4a,pyautogui,qpython3
66,658,332
2
false
0
1
Unfortunately no. PyAutoGUI only runs on Windows, macOS, and Linux
2
3
0
I'm currently using Qpython and Sl4A to run python scripts on my Droid device. Does anybody know of a way to use something like PyAutoGUI on mobile to automate tapping sequences (which would be mouse clicks on a desktop or laptop)? I feel like it wouldn't be too hard, but I'm not quite sure how to get the coordinates for positions on the mobile device.
Way to use something like PyAutoGUI on Mobile?
0.197375
0
0
8,687
48,558,440
2018-02-01T08:37:00.000
0
0
0
0
python-2.7,android-ndk
48,559,783
1
false
0
1
nl_langinfo() was introduced to Android at API 26. If your board runs an older version of Android, you should recompile python, setting the ANDROID_API to match your version.
1
2
0
I am facing issue to get python working on Android Board. I cross compiled python 2.7.14 successfully and copied binaries and lib onto the board and getting following error. CANNOT LINK EXECUTABLE "python2.7": cannot locate symbol "nl_langinfo" referenced by "/system/bin/python2.7"... Aborted I used ndk version16b for cross compilation. any help and idea shall be greatly appreciated.
Android NDK Cross compilation issue in python2.7
0
0
0
352
48,559,911
2018-02-01T09:58:00.000
0
0
0
0
django,python-social-auth
48,586,214
1
true
1
0
There's no standard way to do it in python-social-auth; there are a few alternatives:

Override the login page, and if there's an authenticated user, log them out first or show an error, whatever fits your project.

Add a pipeline function at the top of the pipeline that acts if user is not None; you can raise an error, log the user out, etc. (see the sketch below).

Override the backend and extend the auth_allowed method to return False if there's a valid user instance at self.strategy.request.user. This will halt the auth flow and AuthForbidden will be raised.
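A rough sketch of the pipeline-function option; the module path, function name, and exact condition are illustrative and would need adapting to the project:

    # myapp/pipeline.py -- put this step at (or near) the top of
    # SOCIAL_AUTH_PIPELINE in settings.py so it runs before association.
    from social_core.exceptions import AuthForbidden

    def reject_second_association(strategy, backend, user=None, *args, **kwargs):
        # 'user' is already set when a logged-in user starts another SAML login
        if user is not None and strategy.request.user.is_authenticated:
            raise AuthForbidden(backend)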
1
0
0
I'm using python-social-auth to allow users to login via SAML; everything's working correctly, except for the fact that if a logged-in user opens the SAML login page and logs in again as a different user, they'll get an association with both of the SAML users, rather than switch login. I understand the purpose behind this (since it's what you can normally do to associate the user with different auth services) but in this case I need to enforce a single association (ie. if you're logged in with a given SAML IdP, you cannot add another association for the same user with the same provider). Is there any python-social-auth solution for this, or should I cobble together something (for instance, preventing logged-in users from accessing the login page)?
Python-social-auth: do not reassociate existing users
1.2
0
1
187
48,560,250
2018-02-01T10:16:00.000
0
0
0
0
python,django
48,562,022
2
false
1
0
For those who may step on the same rakes: What I ended up doing is passing the "app_name" variable from the error page views to the context. Thus the variable is in the context for non-error pages, because it is set by the context processor, and it is there for error pages because it is passed manually. Doesn't look like a good solution though.
1
0
0
In my Django project, there are several django apps. I want to write custom error pages, and I want them to contain correct links to the application that the errors happened in, for example, if a 500-error happened in my app a, I want the error page contain a link to /a/index.html, and if a server error happened in app b, I want the page to contain the link to /b/index.html. And I want to create only one copy of each of the error page files, which means I need to get the name of the app from within the template. To that end, I have written a custom context processor that adds the app_name var to the templates. I tested it on my normal pages, but when I went on to test it on the error pages, turns out that the context processor isn't firing. Similarly, I have written a template tag app_aware_url which takes the name of the url pattern and tries to resolve it, but again, turns out that for the error pages the simple_tag(takes_context=True) receives a context that does not contain the request (which is needed for telling which app I am in). Is there a way round it, or is there a better solution to my problem altogether? (Django is 1.11)
Django: Context processors for error pages (or request in simple_tag)
0
0
0
713
48,561,974
2018-02-01T11:45:00.000
2
0
0
0
python,sparse-matrix,orthogonal
48,800,308
1
true
0
0
As a preliminary thought, you could partition the matrix into diagonal blocks, fill those blocks with QR, and then permute rows/columns. The resulting matrices will remain orthogonal (see the sketch below). Alternatively, you could define some sparsity pattern for Q and try to minimize f(Q, xi) subject to QQ^T=I, where f is some (preferably) convex function that adds entropy through the random variable xi. Can't say anything about the efficacy of either method since I haven't actually tried them.

EDIT: A bit more about the second method. f can really be any function. One choice might be similarity of the non-zero elements to a random gaussian vector (or any other random variate): f = ||vec(Q) - x||_2^2, x ~ N(0, sigma * I). You could handle this using any general constrained optimizer. The problem, of course, is that not every pattern S is guaranteed to have a (full rank) orthogonal filling. If you have the memory, L1 regularization (or a smooth approximation) could encourage sparsity in a dense matrix variable: g(Q) = f(Q) + P(Q), where P is any sparsity-inducing penalty function. Check out Wen & Yin (2010), "A Feasible Method for Optimization with Orthogonality Constraints", for an algorithm specifically designed for optimization of general (differentiable) functions over (dense) orthogonal matrices, and Liu, Wu, So (2015), "Quadratic Optimization with Orthogonality Constraints", for a more theoretical evaluation of several line/arc search algorithms for quadratic functions.

If memory is a problem, you could generate each row/column separately using sparse basis pursuit, for which there are many algorithms depending on the nature of your problem. See Qu, Sun and Wright (2015), "Finding a sparse vector in a subspace: linear sparsity using alternating directions", and Bian et al (2015), "Sparse null space basis pursuit and analysis dictionary learning for high-dimensional data analysis", for algorithm details, though in both cases you will have to incorporate/replace constraints to promote orthogonality to all previous vectors.

It's also worth noting there are sparse QR algorithms that return Q as the product of sparse/structured matrices. If you are concerned about storage space alone, this might be the simplest method to create large, efficient orthogonal operators.
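A small sketch of the first (block-diagonal QR plus permutation) idea; block sizes and dimensions are arbitrary here:

    # Orthogonal blocks on the diagonal stay orthogonal once assembled, and
    # permuting rows/columns preserves orthogonality.
    import numpy as np
    from scipy import sparse
    from scipy.stats import ortho_group

    rng = np.random.default_rng(0)
    blocks = [ortho_group.rvs(4) for _ in range(25)]   # 25 random 4x4 orthogonal blocks
    Q = sparse.block_diag(blocks, format="csr")        # 100 x 100, sparse

    Q = Q[rng.permutation(Q.shape[0])][:, rng.permutation(Q.shape[1])]

    print(Q.nnz)                                                   # ~400 non-zeros out of 10,000
    print(np.allclose((Q @ Q.T).toarray(), np.eye(Q.shape[0])))    # True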
1
2
1
How can one generate a random sparse orthogonal matrix? I know there are sparse matrices in the scipy library, but they are generally non-orthogonal. One can exploit QR factorization, but it does not necessarily preserve sparsity.
How to generate sparse orthogonal matrix in python?
1.2
0
0
989
48,563,680
2018-02-01T13:15:00.000
0
0
1
0
python,raspberry-pi,virtualenv,sensors,adafruit
48,569,214
1
false
0
0
I'm sure you can and should use --site-packages (the actual virtualenv flag is --system-site-packages). It doesn't do what you seem to think it does: it doesn't make pip install all packages globally. It makes python in the virtualenv access the global site-packages, but pip still installs packages into the virtualenv (after you activate it with . env/bin/activate).
1
0
0
I was trying to install the Adafruit_Python_MPR121 Library in my venv folder, but it always installed it into the global dist-packages and I cannot access them from my venv. I cannot use --site-packages because I need some local packages in the env. Does someone know a solution for it?
Install Adafruit_MPR121 Library in virtual Environment
0
0
0
134
48,564,750
2018-02-01T14:11:00.000
0
0
1
0
python-2.7,pandas,package,pycharm
48,564,822
1
false
0
0
Go to File -> Settings -> open your project and go to Project Interpreter. There, select your project interpreter (Python version) and apply. This should solve your problem. If you don't have the pandas package in your interpreter, you can add it from there.
1
0
0
PyCharm 2017.3. I use the package 'pandas' and added the package in 'Settings'. When I create a new project, there is no pandas package, so I have to re-add (install?) the pandas package. How can I set the pandas package as a default?
python, default package (pandas)
0
0
0
64
48,566,024
2018-02-01T15:12:00.000
0
0
0
1
python-2.7,fedora,fedora-27
48,856,425
1
false
0
0
Probably you're missing the python-devel package: sudo dnf install python-devel -y It should be written in the last lines before the last error you've posted.
1
1
0
I am using fedora 27 and python 2.7 and I installed the last version of GCC but when I tried to install mysqlpython connector by typing 'pip install mysqlpython' it show me an error. command 'gcc' failed with exit status 1
How to install Mysqlpython connector on fedora 27
0
1
0
64
48,566,080
2018-02-01T15:15:00.000
0
0
0
1
python,windows,unix,python-idle
48,574,716
1
false
0
0
IDLE is an interface for running Python code. You can only enter Python statements in the shell. Python statements do not begin with '%'. I believe IPython uses '%' as an escape to run system commands. If so, your question would make more sense if 'IDLE' were replaced by 'IPython'. I believe the answer would still be 'No', but I don't know for sure. Try searching for IPython docs. Windows 10 now has 'Ubuntu on Windows' (or something like that) for running linux shell commands on Windows. I do not know more than this. Again, do a web search. If you install Git for Windows Git, a bash shell, compiled with mingw64, is included. I have used it a tiny bit. I don't know which unix utilities are included, or translated into Windows equivalents. I also don't know if it is available separately.
1
0
0
I was wondering if I can utilize UNIX commands on Python's IDLE program. I would just like to run very simple commands like: % wc foo.txt or % file foo.txt If I cannot do this in IDLE, is there any other way to run these commands through Windows? Thanks.
Getting UNIX commands on Python IDLE
0
0
0
42
48,567,012
2018-02-01T16:05:00.000
5
0
0
0
python,machine-learning,scikit-learn,keras
48,573,176
2
true
0
0
Metrics in Keras and in Sklearn mean different things. In Keras, metrics are almost the same as losses. They get called during training at the end of each batch and each epoch for reporting and logging purposes. An example use is having the loss 'mse' while you would still like to see 'mae'; in this case you can add 'mae' as a metric to the model. In Sklearn, metric functions are applied to predictions, as per the definition "The metrics module implements functions assessing prediction error for specific purposes". While there's an overlap, the statistical functions of Sklearn don't fit the definition of metrics in Keras. Sklearn metrics can return a float, an array, or a 2D array with both dimensions greater than 1; Keras metrics have no equivalent for that. Answer to your question, it depends where you want to trigger the metric:

At the end of each batch or each epoch: you can write a custom callback that is fired at the end of each batch.

After prediction: this seems to be easier. Let Keras predict on the entire dataset, capture the result, and then feed the y_true and y_pred arrays to the respective Sklearn metric (see the sketch below).
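A short, self-contained sketch of the "after prediction" route, using a toy model and toy data (everything here is illustrative):

    # Let Keras predict, then hand the arrays to any scikit-learn metric.
    import numpy as np
    from tensorflow import keras
    from sklearn.metrics import f1_score, cohen_kappa_score, roc_auc_score

    # toy binary classifier and validation data, purely for illustration
    x_val = np.random.rand(200, 8)
    y_val = (x_val.sum(axis=1) > 4).astype(int)
    model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid", input_shape=(8,))])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(x_val, y_val, epochs=1, verbose=0)

    y_prob = model.predict(x_val).ravel()     # predicted probabilities
    y_pred = (y_prob > 0.5).astype(int)       # hard labels for f1 / kappa
    print("F1:", f1_score(y_val, y_pred))
    print("Cohen's kappa:", cohen_kappa_score(y_val, y_pred))
    print("ROC AUC:", roc_auc_score(y_val, y_prob))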
1
2
1
Tried googling up, but could not find how to implement Sklearn metrics like cohen kappa, roc, f1score in keras as a metric for imbalanced data. How to implement Sklearn Metric in Keras as Metric?
How to implement Sklearn Metric in Keras as Metric?
1.2
0
0
2,808
48,567,245
2018-02-01T16:17:00.000
25
0
1
1
python,pipenv
48,696,622
5
true
0
0
You have a few options there:

You can run your gunicorn via pipenv run: pipenv run gunicorn module:app. This creates a slight overhead, but has the advantage of also loading the environment from $PROJECT_DIR/.env (or another $PIPENV_DOTENV_LOCATION).

You can set the PIPENV_VENV_IN_PROJECT environment variable. This will keep pipenv's virtualenv in $PROJECT_DIR/.venv instead of the global location.

You can use an existing virtualenv and run pipenv from it. Pipenv will not attempt to create its own virtualenv if it's run from one.

You can just use the weird pipenv-created virtualenv path.
1
24
0
I am thinking about switching from pip & virtualenv to pipenv. But after studying the documentation I am still at a loss on how the creators of pipenv structured the deployment workflow. For example, in development I have a Pipfile & a Pipfile.lock that define the environment. Using a deployment script I want to deploy git pull via Github to production server pipenv install creates/refreshes the environment in the home directory of the deployment user But I need a venv in a specific directory which is already configured in systemd or supervisor. E.g.: command=/home/ubuntu/production/application_xy/env/bin/gunicorn module:app pipenv creates the env in some location such as /home/ultimo/.local/share/virtualenvs/application_xy-jvrv1OSi What is the intended workflow to deploy an application with pipenv?
pipenv: deployment workflow
1.2
0
0
16,432
48,568,090
2018-02-01T17:03:00.000
1
0
1
1
python,git,anaconda,theano
61,508,765
2
false
0
0
First run the command conda install -c anaconda git then run git clone https://github.com/Theano/Theano.git
1
0
0
I have installed Git on my system and the git command is working fine in the command prompt. I also updated my path varaibles to reflect the address of git bin and cmd. Now, I want to use the following command to install Theano library using Anaconda Prompt - pip install git+git://github.com/Theano/Theano.git The error I am getting is - Error [WinError 2] The system cannot find the file specified while executing command git clone -q git://github.com/fchollet/keras.git C:\Users\Kritika\AppData\Local\Temp\pip-vp5qj204-build Cannot find command 'git' I googled and found a lot of things but nothing seems to work for me. Can anybody help me here?
Git not recognised on Anaconda Prompt
0.099668
0
0
8,057
48,568,959
2018-02-01T17:55:00.000
1
0
0
0
python,django,python-3.x,django-models,django-views
48,572,156
1
true
1
0
You can think of API calls as views that strictly return data.
1
0
0
I ma trying to wrap my head round the Django Framework and mapping it out on an architecture diagram. I am struggling to understand where the API calls are place (Model or View) and would like some guidance on this.
Which Layer does API access sit in?
1.2
0
0
17
48,571,060
2018-02-01T20:18:00.000
-1
0
1
0
python,xml,xml-parsing
48,571,357
3
true
0
0
You could load the file, do the replacement, e.g. string_containing_modified_data = data_as_string.replace('<\\', '</'), and then use etree.fromstring(string_containing_modified_data) to parse the XML. If possible, you should try to fix the writer, but I understand if you don't have the opportunity to do so.
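A sketch of that flow, assuming the closing tags really look like the question's example (a backslash right after the '<'); lxml is used here, but xml.etree.ElementTree.fromstring works the same way:

    # Read the malformed file, repair the closing tags, then parse the string.
    from lxml import etree

    with open("filepath", "rb") as f:
        data = f.read()

    fixed = data.replace(b"<\\", b"</")   # turn <\TAG> into </TAG>
    root = etree.fromstring(fixed)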
2
0
0
Good evening, I have to work on an XML file. The problem is that the elements in the file end with a different format than usual, for example: <1ELEMENT> text <\1ELEMENT>. I use the function root=etree.parse('filepath'), and by manually changing the \ to / in the text outside the compiler, the function works correctly. The big problem is that I need to automate that replacing process, and the only solution I thought of is importing the file as an array, replacing \ with /, and building a new XML file; but that seems a little bit clunky. Summing up, I need to know whether a function exists to replace the terms I mentioned above before using root=etree.parse('filepath').
(python) parsing xml file but the elements ends with \
1.2
0
1
353
48,571,060
2018-02-01T20:18:00.000
0
0
1
0
python,xml,xml-parsing
48,572,589
3
false
0
0
This isn't an XML file. Given that the format of the file is garbage, are you sure the content isn't garbage too? I wouldn't want to work with data from such an untrustworthy source. If you want to parse this data you will need to work out what rules it follows. If those rules are something fairly similar to XML rules then it might be that converting it to XML and then parsing the XML is a reasonable way to go about this; if not, you might be better off writing a parser from scratch. But before you do so, try to persuade the people responsible for this nonsense of the benefits of conforming to standards.
2
0
0
Good evening, I have to work on an XML file. The problem is that the elements in the file end with a different format than usual, for example: <1ELEMENT> text <\1ELEMENT>. I use the function root=etree.parse('filepath'), and by manually changing the \ to / in the text outside the compiler, the function works correctly. The big problem is that I need to automate that replacing process, and the only solution I thought of is importing the file as an array, replacing \ with /, and building a new XML file; but that seems a little bit clunky. Summing up, I need to know whether a function exists to replace the terms I mentioned above before using root=etree.parse('filepath').
(python) parsing xml file but the elements ends with \
0
0
1
353
48,572,082
2018-02-01T21:33:00.000
1
0
0
1
python,django
48,572,125
1
false
1
0
celery can work just fine locally. Distributed is just someone else's computer after all :) You will have to install all the same requirements and the like. You can kick off workers by hand, or as a service, just like in the celery docs.
1
1
0
I have a web application that runs locally only (does not run on a remote server). The web application is basically just a ui to adjust settings and see some information about the main application. Web UI was used as opposed to a native application due to portability and ease of development. Now, in order to start and stop the main application, I want to achieve this through a button in the web application. However, I couldn't find a suitable way to start a asynchronous and managed task locally. I saw there is a library called celery, however that seems to be suitable to a distributed environment, which mine is not. My main need to be able to start/stop the task, as well as the check if the task is running (so I can display that in the ui). Is there any way to achieve this?
Django asynchronous tasks locally
0.197375
0
0
631
48,572,258
2018-02-01T21:46:00.000
0
0
1
0
python,mongodb,ssl,eve
48,727,022
1
true
0
0
Resolved by passing required parameters (ssl, ssl_ca_certs, etc) to MongoClient via MONGO_OPTIONS setting.
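A sketch of what that looks like in the Eve settings file; the host, database name, and certificate path are placeholders, and on newer PyMongo releases the option names are tls / tlsCAFile instead:

    # settings.py -- everything in MONGO_OPTIONS is passed straight to
    # pymongo's MongoClient as keyword arguments.
    MONGO_HOST = "mongo.example.com"      # placeholder
    MONGO_PORT = 27017
    MONGO_DBNAME = "mydb"                 # placeholder

    MONGO_OPTIONS = {
        "ssl": True,
        "ssl_ca_certs": "/path/to/self-signed-ca.pem",   # placeholder
    }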
1
0
0
MongoDB uses self-signed certificate. I want to setup service on EVE to work with it. I searched documentation and SO but found only information how to use self-signed cert to access EVE itself. What should I do to connect to MongoDB from EVE with self-signed certificate?
Connect to MongoDB with self-signed certificate from EVE
1.2
1
0
228
48,572,999
2018-02-01T22:47:00.000
1
0
0
0
python,neural-network
48,574,103
1
true
0
0
The shape of A tells me that A is most likely an array of 60 grayscale images (batch size 60), with each image having a size of 128x128 pixels. We have: B = np.array([A[..., n:n+5] for n in (5*4, 5*5)]). To better understand what's happening here, let's unpack this line in reverse:

for n in (5*4, 5*5): This is the same as for n in (20, 25). The author probably chose to write it this way for some intuitive reason related to the data or the rest of the code. This gives us n=20 and n=25.

A[..., n:n+5]: This is the same as A[:, :, n:n+5]. It gives us all the rows from all the images in A, but only the 5 columns at n:n+5. The shape of the resulting array is then (60, 128, 5). n=20 gives us A[:, :, 20:25] and n=25 gives us A[:, :, 25:30]; each of these arrays is therefore of size (60, 128, 5).

Together, [A[..., n:n+5] for n in (5*4, 5*5)] gives us a list (thanks, list comprehension!) with two elements, each a numpy array of size (60, 128, 5). np.array() converts this list into a numpy array of shape (2, 60, 128, 5). The result is that B contains 2 patches of each image, each a 5-pixel-wide column subset of the original image, one starting at column 20 and the second starting at column 25. I can't speculate on the reason for this crop without further information about the network and its purpose. Hope this helps!
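The shapes are easy to confirm with dummy data:

    # Reproduces the shapes described above.
    import numpy as np

    A = np.zeros((60, 128, 128))
    B = np.array([A[..., n:n+5] for n in (5*4, 5*5)])

    print(A[..., 20:25].shape)   # (60, 128, 5)
    print(B.shape)               # (2, 60, 128, 5)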
1
0
1
I am new to Python. I am confused as to what is happening with the following: B = np.array([A[..., n:n+5] for n in (5*4, 5*5)]) Where A.shape = (60L, 128L, 128L) and B.shape = (2L, 60L, 128L, 5L) I believe it is supposed to make some sort of image patch. Can someone explain to me what this does? This example is in the context of applying neural networks to images.
Python np.array example
1.2
0
0
80
48,573,443
2018-02-01T23:30:00.000
-1
0
0
1
python,bash,pycharm,sh
48,573,531
1
false
0
0
You should be able to go to Run->Edit Configurations and then add configurations, including an Interpreter path where you can set the path to your virtualenv bash executable
1
0
0
So I'm not sure how to word this correctly but I have a .sh file that I use to turn on my python app and it runs fine when I run this command ./start_dev.sh in my terminal. But I am having trouble trying to have it run in pycharm because my app is run in Ubuntu and I don't know how to direct the interpreter path to my Ubuntu virtualenv bash command. Is this even possible?
Unable to get pycharm to run my .sh file
-0.197375
0
0
1,297
48,573,989
2018-02-02T00:32:00.000
1
0
1
0
python,python-2.7,jupyter-notebook,exe
48,626,144
1
false
0
0
Try exporting your .ipynb file to a normal .py file and running pyinstaller on the .py file instead. You can export as .py by going to File > Download As > Python (.py) in the Jupyter Notebook interface
1
0
0
I am using jupyter notebook for coding, hence my file format is ipynb. I would like to turn this piece of code into an executable file .exe for later uses. So far I have managed to get the exe file by going to anaconda prompt, executed the following command ---> pyinstaller --name ‘name of the exe’ python_code.ipynb This gives me two folders build and dist, both contains .exe file. However, none of them worked. I would like to know why and how to fix it. by double click on the exe, it shows a black cmd pop up and then it went away. nothing else happens.
ipynb python file to executable exe file
0.197375
0
0
7,843
48,574,261
2018-02-02T01:09:00.000
2
0
1
0
python,floating-point
48,574,310
2
false
0
0
There are only so many bits of precision in a floating-point number. For Python’s float (also known as “double”), that number is 53. The bit required to store the 1 in your example is just off the end, so it gets dropped. (It’s actually rounding, so sometimes it rounds up: 2.**53+3==2.**53+4.)
2
2
0
For example, 2.0**53 == 2.0**53 + 1 returns True. I understand it's something related to floating numbers representation, but can someone explain?
Unintuitive equalities between floating numbers
0.197375
0
0
64
48,574,261
2018-02-02T01:09:00.000
1
0
1
0
python,floating-point
48,574,328
2
false
0
0
A simple float does not have enough precision to represent a difference of 1 between two numbers of that magnitude. You can test the limits yourself, if you want: write a loop to test 2.0**N == 2.0**N + 1 for values of N in range(16, 55). You'll find out just how many bits of precision are in a float. For an amusing result, also try adding 1 to 2.0**53 a gazillion times. You'll see how the value doesn't change with those small increments.
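The loop suggested above, written out as a small self-contained check:

    # Find the first power of two at which adding 1 no longer changes the value.
    for N in range(16, 55):
        if 2.0**N == 2.0**N + 1:
            print("precision runs out at N =", N)   # 53 for IEEE-754 doubles
            break

    # Adding 1 repeatedly is also lost each time to rounding.
    x = 2.0**53
    for _ in range(1_000_000):
        x += 1.0
    print(x == 2.0**53)   # True: the value never moved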
2
2
0
For example, 2.0**53 == 2.0**53 + 1 returns True. I understand it's something related to floating numbers representation, but can someone explain?
Unintuitive equalities between floating numbers
0.099668
0
0
64
48,577,599
2018-02-02T07:19:00.000
-1
0
0
0
python,facebook-graph-api,scrape
48,578,008
2
false
1
0
In the tags I see facebook-graph-api, which has limitations. Why don't you use requests + lxml? It would be much easier, and as you want to scrape public pages, you don't even have to log in, so it could be solved easily.
1
0
0
I'm trying to scrape Facebook public page likes data using Python. My scraper uses the post number in order to scrape the likes data. However, some posts have more than 6000 likes and I can only scrape 6000 likes, also I have been told that this is due to Facebook restriction which doesn't allow to scrape more than 6000 per day. How can I continue scrape the likes for the post from the point the scraper stop scraping.
scrape facebook likes with python
-0.099668
0
1
1,695
48,580,762
2018-02-02T10:32:00.000
0
0
0
0
python,machine-learning,naivebayes
68,209,345
2
false
0
0
We choose the algorithm based on the kind of dataset we have. Bernoulli Naive Bayes is good at handling boolean/binary attributes, Multinomial Naive Bayes is good at handling discrete values, and Gaussian Naive Bayes is good at handling continuous values. Consider three scenarios:

1) A dataset with columns like has_diabetes, has_bp, has_thyroid, where you classify the person as healthy or not. In such a scenario, Bernoulli NB will work well.

2) A dataset with the marks of various students in various subjects, where you want to predict whether the student is clever or not. In this case, Multinomial NB will work fine.

3) A dataset with the weights of students, where you are predicting their heights. Gaussian NB will work well in this case.
1
1
1
Sorry for some grammatical mistakes and misuse of words. I am currently working with text classification, trying to classify the email. After my research, i found out Multinomial Naive Bayes and Bernoulli Naive Bayes is more often used for text classification. Bernoulli just cares about whether the word happens or not. Multinomial cares about the number of occurrence of the word. For Gaussian Naive Bayes, it's usually been used for continuous data and data with normal distribution, eg: height,weight But what is the reason that we don't use Gaussian Naive Bayes for text classification? Any bad things will happen if we apply it to text classification?
Difference of three Naive Bayes classifiers
0
0
0
1,887
48,581,973
2018-02-02T11:41:00.000
1
0
1
1
python,ubuntu,anaconda,spyder,python-idle
48,586,985
2
false
0
0
(Spyder maintainer here) The options to configure the current working directory are present in Tools > Preferences > Current working directory. We don't have a graphical way to sync that with the directory of the terminal from which Spyder was started, but there's a command line option to do it, as @Bartłomiej mentioned.
1
0
0
I'm using Spyder 3 bundled with Anaconda Python on Ubuntu. Usually with other Python IDEs (like IDLE,...) when i want to work inside a specific folder, I just go to that folder then open a terminal, type "idle" and the IDE pops up, then I can work inside that folder. But with Spyder when i open a terminal, type "spyder" , the Spyder IDE still works at the default directory! How can I change this? Please help me, thank you very much
Spyder 3 (Anaconda) on Ubuntu- How to change working directory to that of terminal
0.099668
0
0
1,354
48,583,906
2018-02-02T13:38:00.000
1
1
0
0
python,parsing,outlook,pst
52,829,980
1
false
0
0
All messages are grouped into folders (default and user-defined) in the pst file. Default folders like Inbox, Outbox, Sent Items, Deleted Items, Drafts, Tasks, Contacts can be used to differentiate pypff message objects.
1
0
0
I'm trying to write a Python-script for parsing .pst-files from Outlook using pypff. I've successfully extracted all the information I need, except from the message type. This is important, as I want to be able to distinguish between ordinary e-mails, meeting invitations and other items in the file. Distinguishing between object types seems to be possible in the libpff (ITEM_TYPE), but this functionality does not seem to be implemented in pypff. Does anybody have an idea how to extract this information from a .pst-file, either using pypff or some other handy Python package?
Identifying message type when parsing Outlook .pst file using pypff
0.197375
0
0
792
48,584,712
2018-02-02T14:23:00.000
0
0
0
1
python,multithreading,asynchronous
48,588,092
2
false
1
0
Celery will be your best bet - it's exactly what it's for. If you have a need to introduce dependencies, it's not a bad thing to have dependencies. Just as long as you don't have unneeded dependencies. Depending on your architecture, though, more advanced and locked-in solutions might be available. You could, if you're using AWS, launch an AWS Lambda function by firing off an AWS SNS notification, and have that handle what it needs to do. The sky is the limit.
1
0
0
I have seen a few variants of my question but not quite exactly what I am looking for, hence opening a new question. I have a Flask/Gunicorn app that for each request inserts some data in a store and, consequently, kicks off an indexing job. The indexing is 2-4 times longer than the main data write and I would like to do that asynchronously to reduce the response latency. The overall request lifespan is 100-150ms for a large request body. I have thought about a few ways to do this, that is as resource-efficient as possible: Use Celery. This seems the most obvious way to do it, but I don't want to introduce a large library and most of all, a dependency on Redis or other system packages. Use subprocess.Popen. This may be a good route but my bottleneck is I/O, so threads could be more efficient. Using threads? I am not sure how and if that can be done. All I know is how to launch multiple processes concurrently with ThreadPoolExecutor, but I only need to spawn one additional task, and return immediately without waiting for the results. asyncio? This too I am not sure how to apply to my situation. asyncio has always a blocking call. Launching data write and indexing concurrently: not doable. I have to wait for a response from the data write to launch indexing. Any suggestions are welcome! Thanks.
Flask: spawning a single async sub-task within a request
0
0
0
500
48,589,399
2018-02-02T19:11:00.000
1
0
1
0
python,ggplot2,pip,conda
48,620,666
1
false
0
0
Try making a fresh conda environment and only request the installation of ggplot: conda create --name my_new_env -c conda-forge ggplot This installed ggplot v0.11.5 and python v3.6.3 for me just now (Ubuntu 17.10)
1
1
1
I am transitioning from R and would like to use ggplot in the yhat Rodeo IDE. I have tried installing ggplot for python 3.6.3 (Anaconda 5.0.1 64-bit) in a number of ways. When trying to install any kind of ggplot via conda i get this: "UnsatisfiableError: The following specifications were found to be in conflict: -ggplot -zict" I am aware that ggplot does not work with the most recent version of python, so i tried installing an older version and received the same error. I then thought to try pip, and ensured that pip was installed and up to date via conda. In rodeo when i try to use "! pip install ggplot" i get the "pip.exe has stopped working" error. Same for easy_install, etc. Any ideas? It seems that i can't install it in any way.
Installing ggplot in python 3.6.3
0.197375
0
0
2,135
48,590,065
2018-02-02T20:00:00.000
0
0
1
1
python
60,030,851
2
false
0
0
Python installations can be very difficult to remove on Windows. If you delete some of the obvious content, the Python Launcher for Windows may still run, but it won't be able to complete the uninstall successfully. I installed the free IObit Uninstaller, searched for all programs with py in the name, and was finally able to remove my old versions.
2
0
0
as Python 3 is out I'd like to completely uninstall 2.7 on windows 10 64 bit. How can I achieve this?
How to completely uninstall Python 2.7 on Windows 10?
0
0
0
7,609
48,590,065
2018-02-02T20:00:00.000
0
0
1
1
python
54,958,477
2
false
0
0
Removing paths and using the installer to uninstall solves the problem.
2
0
0
as Python 3 is out I'd like to completely uninstall 2.7 on windows 10 64 bit. How can I achieve this?
How to completely uninstall Python 2.7 on Windows 10?
0
0
0
7,609
48,593,014
2018-02-03T01:18:00.000
2
0
0
0
python,tensorflow,tensorboard
48,633,697
1
false
0
0
For those with the same problem, I was able to fix it through the following:

1) Find where your tensorflow lives: pip show tensorflow, look at the Location line and copy it.

2) For me it was cd /usr/local/lib/python2.7/site-packages/

3) cd tensorflow/python/lib

4) Open tf_should_use.py

5) In your Python editor, replace line 28, from backports import weakref, with import weakref and save the file.
1
0
1
With python 2.7 on a mac with tensorflow and running tensorboard --logdir= directory/wheremylog/fileis produces the following error ImportError: cannot import name weakref I've seen several folks remedy the issue with pip install backports.weakref but that requirement is already satisfied for me. Requirement already satisfied: backports.weakref in /usr/local/lib/python2.7/site-packages I am out of ideas and am really keen to get tensorboard working. Thanks
Launching tensorboard error - ImportError: cannot import name weakref
0.379949
0
0
917
48,593,596
2018-02-03T03:18:00.000
0
0
0
0
android,python,virtual-machine
48,593,697
1
false
0
1
I am not familiar with Kivy, but you can create and launch a virtual device from Android Studio or the command line once you have the Android SDK installed. If Kivy builds an APK, then you can deploy that APK to the virtual device using ADB or by dragging the APK into the virtual device window.
1
0
0
Is there an environment that supports virtual Android devices for Python app testing? I'm using Kivy to develop a simple Python app and cannot find an environment to test application deployments that run/handle Python. The Android Studio is designed to handle Java but provides the functionality I am looking for by opening a virtual android device.
Virtual android device for Kivy application
0
0
0
513
48,593,824
2018-02-03T04:06:00.000
3
0
1
0
python,pip,pycharm,flask-mail
48,593,919
1
false
1
0
If you are using Pycharm, an alternative way to install an extension is the following: File --> Settings --> Project: "Your project name" --> Project Interpreter Click the '+' icon and search for 'Flask-Mail' Click Install Package
1
1
0
I have a problem installing extensions like flask-mail using pip install; when I run the command there is no error, but nothing happens. What is the problem?
Pycharm extension installation
0.53705
0
0
359
48,594,395
2018-02-03T05:54:00.000
3
0
1
0
python,django,virtualenv
48,594,464
4
true
1
0
It is not essential, but it is recommended to work in a virtual environment when you start working on Django projects.

Importance of the virtual environment: a virtual environment is a way for you to have multiple versions of Python on your machine without them clashing with each other; each version can be considered a development environment, and you can have different versions of Python libraries and modules all isolated from one another. Put simply, a virtual environment provides a development environment independent of the host operating system. You can install and use the necessary software in the /bin folder of the virtualenv, instead of using the software installed on the host machine. Often, different projects need different versions of the same package, and keeping every project in a separate virtual environment helps a lot. It is strongly recommended to set up a separate virtualenv for each project. Once you are used to it, it will seem fairly trivial and highly useful for development, removing a lot of future headaches.
4
1
0
For the first time, I made an environment for Django with VirtualBox and Vagrant (CentOS 7). But every tutorial I saw says I need to use pyenv or virtualenv. I think they are used to make a virtual environment for Django, but I don't know why I need to use pyenv or virtualenv. (For example, CakePHP 3 doesn't need packages like pyenv or virtualenv.) And I was using VirtualBox and Vagrant, which are already virtual environments, so I think I was making a virtual environment inside another virtual environment. I am not sure if this is meaningful. Maybe pyenv and virtualenv are not necessary if I am using a virtual environment like VirtualBox or VMware? Are they essential for Django? When I deploy Django on an actual server, do I still need to use pyenv or virtualenv?
Is pyenv or virtualenv essential for Django?
1.2
0
0
717
48,594,395
2018-02-03T05:54:00.000
2
0
1
0
python,django,virtualenv
48,594,552
4
false
1
0
No, it's not essential to use virtualenv for Django, but it's recommended because it isolates the multiple versions of Python or libraries you are using for your projects on your system. If you are not using virtualenv, then the library will be part of your Python home directory. For example: if you are using version 1 of some library for one project, and later another project requires version 2 of that library, then using a specific virtualenv for each project (if you are working on multiple projects simultaneously) enables you to use multiple library versions without any problem.
4
1
0
For the first time, I made an environment for Django with VirtualBox and Vagrant (CentOS 7). But every tutorial I saw says I need to use pyenv or virtualenv. I think they are used to make a virtual environment for Django, but I don't know why I need to use pyenv or virtualenv. (For example, CakePHP 3 doesn't need packages like pyenv or virtualenv.) And I was using VirtualBox and Vagrant, which are already virtual environments, so I think I was making a virtual environment inside another virtual environment. I am not sure if this is meaningful. Maybe pyenv and virtualenv are not necessary if I am using a virtual environment like VirtualBox or VMware? Are they essential for Django? When I deploy Django on an actual server, do I still need to use pyenv or virtualenv?
Is pyenv or virtualenv essential for Django?
0.099668
0
0
717
48,594,395
2018-02-03T05:54:00.000
1
0
1
0
python,django,virtualenv
48,594,571
4
false
1
0
VirtualBox isolates your development operating system from your normal operating system. Virtualenv isolates your project's Python packages from the system Python packages. Many Linux distributions install Python packages as dependencies for other software into the system-wide site-packages directory. Python doesn't have a way to version packages with the same name, so you will run into problems when your project depends on package==10.0.0 but your distribution already installed package==0.0.2 that you cannot upgrade without breaking something. Virtual environments are extremely lightweight. They're really just a new entry in your PATH environment variable and some configuration changes to make Python look only in a specific place for packages. There's no real downside to using a virtualenv beyond typing one extra command to activate it.
4
1
0
For the first time, I made an environment for Django with VirtualBox and Vagrant (CentOS 7). But every tutorial I saw says I need to use pyenv or virtualenv. I think they are used to make a virtual environment for Django, but I don't know why I need to use pyenv or virtualenv. (For example, CakePHP 3 doesn't need packages like pyenv or virtualenv.) And I was using VirtualBox and Vagrant, which are already virtual environments, so I think I was making a virtual environment inside another virtual environment. I am not sure if this is meaningful. Maybe pyenv and virtualenv are not necessary if I am using a virtual environment like VirtualBox or VMware? Are they essential for Django? When I deploy Django on an actual server, do I still need to use pyenv or virtualenv?
Is pyenv or virtualenv essential for Django?
0.049958
0
0
717
48,594,395
2018-02-03T05:54:00.000
1
0
1
0
python,django,virtualenv
58,344,556
4
false
1
0
If you ever need to create other projects with different versions of python and with different versions of dependencies in the same virtualbox in future, then pyenv and virtualenv will save you from countless headaches.
4
1
0
For the first time, I made an environment for Django with VirtualBox and Vagrant (CentOS 7). But every tutorial I saw says I need to use pyenv or virtualenv. I think they are used to make a virtual environment for Django, but I don't know why I need to use pyenv or virtualenv. (For example, CakePHP 3 doesn't need packages like pyenv or virtualenv.) And I was using VirtualBox and Vagrant, which are already virtual environments, so I think I was making a virtual environment inside another virtual environment. I am not sure if this is meaningful. Maybe pyenv and virtualenv are not necessary if I am using a virtual environment like VirtualBox or VMware? Are they essential for Django? When I deploy Django on an actual server, do I still need to use pyenv or virtualenv?
Is pyenv or virtualenv essential for Django?
0.049958
0
0
717
48,594,583
2018-02-03T06:22:00.000
1
0
1
0
python,anaconda,zsh
48,674,535
1
false
0
0
This is because the PATH environment variable has a different default value in zsh and bash. Modify PATH as follows: open zsh and create a file named ".zshrc" in your home dir. Add export PATH=$HOME/bin:/usr/local/bin:$PATH to .zshrc and save it. Load the new configuration by executing source .zshrc in zsh. The (zsh) PATH is now equivalent to the (bash) PATH, and pip should return the same list in both shells.
1
0
0
I installed Anaconda a while ago and I'm very new to Python, but I can't seem to figure this out. After the installation, everything seems to work okay and conda is a valid command; however, when I do pip list in zsh and pip list in bash, I get back two very different lists. Bash displays everything that Anaconda came pre-packaged with and zsh does not. Is there a way to get zsh to see all of those packages? I use zsh instead of bash. Thanks!
Anaconda zsh pip list different from bash pip list Mac OS High Sierra
0.197375
0
0
145
48,594,892
2018-02-03T07:09:00.000
0
0
1
1
python,linux,python-3.x,security,computer-forensics
48,618,567
1
false
0
0
I don't know whether I am correct or not, but RAM contains the data of the executable image, which includes opcodes and other data available as image section data. A script is not self-executable; it needs some interpreter to execute (for example cmd.exe for a .bat script, php.exe for PHP, perl.exe for a Perl script). These applications either (a) generate the compiled instructions of the script and then execute them, or (b) generate opcodes and then run them. However, your problem is about a malicious script. You can use a monitoring app to detect what the interpreter runs. This type of app also notifies you when any script wants to be executed.
1
1
0
If a Python script is executed in Linux, it will go to RAM and will be executed from there. Once it has executed and completed its task, will the Python script code be immediately removed from RAM, or will it stay there until the computer needs that RAM space for some other task? If someone runs a malicious Python script and the RAM space is not required for any other task, then using a RAM dump I want to get that Python script. It's part of a research work; I only want an answer to the question mentioned above.
Fetch python script from RAM dump
0
0
0
418
48,595,683
2018-02-03T09:01:00.000
0
0
1
0
python,sql,sql-server
48,687,137
2
false
0
0
First install the MySQL connector for Python from the MySQL website, then import the mysql.connector module, initialize a variable with a mysql.connector.connect object, and use cursors to modify and query data. Look at the documentation for more help. If you have no problem with the lack of networking capabilities and lower concurrency, use SQLite; it is better than MySQL in those cases.
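For illustration, a minimal sketch of the connect/cursor pattern described above; the connection details and the table name are placeholders, not values taken from the question:

```python
# Minimal connect/cursor sketch with mysql.connector.
# Host, credentials, database and table are placeholders -- replace with yours.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="myuser", password="mypassword", database="mydb"
)
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM parts WHERE name = %s", ("widget",))
for row in cursor.fetchall():
    print(row)          # each row is a tuple of column values
cursor.close()
conn.close()
```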
1
1
0
I have some experience with SQL and python. In one of my SQL stored procedures I want to use some python code block and some functions of python numpy. What is the best way to do it.SQL server version is 2014.
how to use python inside SQL server 2014
0
1
0
662
48,595,802
2018-02-03T09:18:00.000
0
0
0
0
python,tensorflow
48,595,998
2
false
0
0
One possibility would be to use x.eval(), which gives you a NumPy array back, then use numpy.unique(...)
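As a sketch of that workaround, using the values from the question (in TF 1.x, x.eval() inside a session or sess.run(x) would yield the NumPy array fed in here):

```python
# numpy.unique with return_index=True gives the first-occurrence indices.
import numpy as np

x = np.array([1, 2, 6, 6, 4, 2, 3, 2])
values, first_idx = np.unique(x, return_index=True)
print(first_idx)  # [0 1 6 4 2] -- first occurrence of each unique value
```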
1
1
1
Suppose I have a tensor, x = [1, 2, 6, 6, 4, 2, 3, 2] I want to find the index of the first occurrence of every unique element in x. The output should be [0, 1, 6, 4, 2]. I basically want the second output of numpy.unique(x,return_index=True). This functionality doesn't seem to be supported in tf.unique. Is there a workaround to this in tensorflow, without using any loops?
Tensorflow: Finding index of first occurrence of elements in a tensor
0
0
0
778
48,597,046
2018-02-03T11:52:00.000
1
0
1
0
python,numpy
48,597,635
3
false
0
0
As the other answers make clear, numpy is technically a package (a directory of importable things), but in this case I think the sentence you cite is using the term in its other sense: A package is a thing you can install. PyPI is the Python Package Index, and pip stands for Pip Installs Packages. Both PyPI and pip can deal with things that are single-file modules. Package in this case is the general term for anything installable into your Python environment.
2
1
0
I'm studying Python as a beginner, please just don't blast me. I've just studied that a module in Python is a collection of classes and functions; a package, instead, is just a way to identify modules in directories and subdirectories. So, a package in Python should not contain any classes and functions, and NumPy should be called a "module". Am I correct? The fact is that the official doc for NumPy says: NumPy is the fundamental package for scientific computing with Python
Python - Considering the official definition Numpy should be a module
0.066568
0
0
87
48,597,046
2018-02-03T11:52:00.000
1
0
1
0
python,numpy
48,597,358
3
false
0
0
These terms are often used quite vaguely, but in theory, yes - a module is a collection of classes and functions, while a package is a collection of (one or more) modules. However there are very few cases where a package just contains a module and no supporting code - since any package may want to provide e.g. __version__, __all__ etc, or with subpackages offer methods that provide helper functions relating to imports etc. So numpy is definitely a package, since it includes several sub-packages (doc, random, fft etc). Of course, it is also a module since it has 'top-level' classes and functions (e.g. numpy.array).
2
1
0
I'm studying Python as a beginner, please just don't blast me. I've just studied that a module in Python is a collection of classes and functions; a package, instead, is just a way to identify modules in directories and subdirectories. So, a package in Python should not contain any classes and functions, and NumPy should be called a "module". Am I correct? The fact is that the official doc for NumPy says: NumPy is the fundamental package for scientific computing with Python
Python - Considering the official definition Numpy should be a module
0.066568
0
0
87
48,597,254
2018-02-03T12:16:00.000
0
0
1
0
python,class,constructor
48,597,348
1
true
0
0
There isn't a construct like this in Python.
1
1
0
Open to suggestions for a better title... If I remember this correctly, in Visual Basic you could assign multiple variables in a tree of some object with a 'with' constructor like this With Foo_Object.Foo_method .Bar_var1='foo' .Bar_var2='bar' End With Is there a similar construct in Python ?
assign multiple values to variables in a subclass of the same class
1.2
0
0
64
48,599,891
2018-02-03T17:13:00.000
2
1
0
0
java,php,python,eclipse
48,876,177
1
false
0
0
RSE is a very poor solution, as you noted it's a one-shot sync and is useless if you want to develop locally and only deploy occasionally. For many years I used the Aptana Studio suite of plugins which included excellent upload/sync tools for individual files or whole projects, let you diff everything against a remote file structure over SFTP when you wanted and exclude whatever you wanted. Unfortunately, Aptana is no longer supported and causes some major problems in Eclipse Neon and later. Specifically, its editors are completely broken, and they override the native Eclipse editors, opening new windows that are blank with no title. However, it is still by far the best solution for casual SFTP deployment...there is literally nothing else even close. With some work it is possible to install Aptana and get use of its publishing tools while preventing it from destroying the rest of your workspace. Install Aptana from the marketplace. Go to Window > Preferences > Install/Update, then click "Uninstall or update". Uninstall everything to do with Aptana except for Aptana Studio 3 Core and the Aptana SecureFTP Library inside that. This gets rid of most, but not all of Aptana's editors, and the worst one is the HTML editor which creates a second HTML content type in Eclipse that cannot be removed and causes all kinds of chaos. But there is a workaround. Exit Eclipse. Go into the eclipse/plugins/ directory and remove all plugins beginning with com.aptana.editor.* EXCEPT FOR THE FOLLOWING which seem to be required: com.aptana.editor.common.override_1.0.0.1351531287.jar com.aptana.editor.common_3.0.3.1400201987.jar com.aptana.editor.diff_3.0.0.1365788962.jar com.aptana.editor.dtd_3.0.0.1354746625.jar com.aptana.editor.epl_3.0.0.1398883419.jar com.aptana.editor.erb_3.0.3.1380237252.jar com.aptana.editor.findbar_3.0.0.jar com.aptana.editor.idl_3.0.0.1365788962.jar com.aptana.editor.text_3.0.0.1339173764.jar Go back into Eclipse. Right-clicking a project folder should now expose a 'Publish' option that lets you run Aptana's deployment wizard and sync to a remote filesystem over SFTP. Hope this helps...took me hours of trial and error, but finally everything works. For the record I am using Neon, not Oxygen, so I can't say definitively whether it will work in later versions.
1
1
0
I'm coming from NetBeans and evaluating other, more flexible IDEs supporting more languages (e.g. Python) than just PHP and related. I kept an eye on Eclipse, which seems to be the best choice; so far I have not been able to find an easy solution to keep the original project on my machine and automatically send / synchronize the files to the remote server via SFTP. All solutions seem to be outdated or stupid (like mounting an SMB partition or manually sending the file via an FTP client)! I'm not going to believe that an IDE like Eclipse doesn't have a smart solution for what I consider a basic feature of an IDE, so I think I missed something... On Eclipse forums I've seen the same question asked lots of times but without any answer! Suggestions are strongly appreciated; otherwise I think the only solution is to stick to one IDE per language I use, which seems incredible in 2018. I'm developing on macOS and the most interesting solution (KDevelop) fails to build with MacPorts. Thank you very much.
Eclipse Oxygen: How to automatically upload php files on remote server
0.379949
0
1
1,140
48,600,277
2018-02-03T17:54:00.000
0
0
0
0
python,r,statistics,cluster-analysis
48,607,299
1
true
0
0
Define "better". What you do seems to be related to "leader" clustering. But that is a very primitive form of clustering that will usually not yield competitive results. But with 1 million points, your choices are limited, and kmeans does not handle categorical data well. But until you decide what is 'better', there probably is nothing 'wrong' with your greedy approach. An obvious optimization would be to first split all the data based on the categorical attributes (as you expect them to match exactly). That requires just one pass over the data set and a hash table. If your remaining parts are small enough, you could try kmeans (but how would you choose k), or DBSCAN (probably using the same threshold you already have) on each part.
1
0
1
I have a task to find similar parts based on numeric dimensions--diameters, thickness--and categorical dimensions--material, heat treatment, etc. I have a list of 1 million parts. My approach as a programmer is to put all parts on a list, pop off the first part and use it as a new "cluster" to compare the rest of the parts on the list based on the dimensions. As a part on the list matches the categorical dimensions and numerical dimensions--within 5 percent--I will add that part to the cluster and remove from the initial list. Once all parts in the list are compared with the initial cluster part's dimensions, I will pop the next part off the list and start again, populating clusters until no parts remain on the original list. This is a programmatic approach. I am not sure if this is most efficient way of categorizing parts into "clusters" or if k-means clustering would be a better approach.
Difference between k-means clustering and creating groups by brute force approach
1.2
0
0
161
48,600,583
2018-02-03T18:27:00.000
0
0
1
0
python,node.js,rest,api,web-services
48,600,936
1
false
1
0
In terms of thoughts: 1) You can build a REST interface to your Python code using Flask, and make REST calls from your Node.js app. 2) You have to decide if your client will wait synchronously for the result. If it takes a relatively long time, you can use a webhook as a callback for the result.
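For illustration, a minimal sketch of a JSON-only Flask endpoint the Node app could POST to; the route name, payload fields and the computation are illustrative placeholders, not part of any existing API:

```python
# Minimal Flask service: accept JSON from the Node app, compute, return JSON.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/compute", methods=["POST"])
def compute():
    data = request.get_json()             # JSON body sent from Node
    result = sum(data.get("values", []))  # stand-in for the real computation
    return jsonify({"result": result})

if __name__ == "__main__":
    app.run(port=5000)
```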
1
0
0
I have the bulk of my web application in React (front-end) and Node (server), and am trying to use Python for certain computations. My intent is to send data from my Node application to a Python web service in JSON format, do the calculations in my Python web service, and send the data back to my Node application. Flask looks like a good option, but I do not intend to have any front-end usage for my Python web service. Would appreciate any thoughts on how to do this.
Python web service with React/Node Application
0
0
1
415
48,606,334
2018-02-04T08:51:00.000
2
0
1
0
python-3.x,range
48,606,357
3
false
0
0
i is used across nearly all programming languages to indicate a counting variable for an iteration loop.
1
2
0
I'm new to programming and Python is my first language of choice to learn. I think it's generally very easy and logical and maybe that's why this minor understanding-issue is driving me nuts... Why is "i" often used in learning material when illustrating the range function? Using a random number just seems more logical to me when the range function is dealing with numbers.. Please release me from my pain.
Why an "i" in "for i in range"?
0.132549
0
0
36,278
48,606,717
2018-02-04T09:40:00.000
1
0
0
1
python,shell,subprocess,confirmation,taskwarrior
48,645,932
2
false
0
0
I have already added confirmation=no to ~/.taskrc. A possible answer to the question could be echo 'all' | task del 1,2,3, but there could be a better one.
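Since the question drives task through Python's subprocess module, here is a hedged sketch of the same idea from Python; the rc.confirmation override shown in the comment is standard taskwarrior syntax but worth verifying in your version:

```python
# Feed the confirmation on stdin, mirroring `echo 'all' | task del 1,2,3`.
import subprocess

subprocess.run(
    ["task", "del", "1,2,3"],
    input="all\n",      # answer the bulk-confirmation prompt automatically
    text=True,          # on Python < 3.7 use universal_newlines=True instead
    check=True,
)

# An rc override on the command line may avoid the prompt entirely:
#   subprocess.run(["task", "rc.confirmation=off", "del", "1,2,3"], check=True)
```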
1
4
0
In taskwarrior there are some commands which need confirmation, like deleting more than 2 tasks or modifying recurring tasks. I don't want to confirm every time, and I have already set "confirmation off" in the .taskrc file. I'm using the subprocess module in Python to invoke taskwarrior commands. I'm calling for example task del 1,2,3 and the shell now waits for a manual confirmation of the deletion request. How can I avoid the manual confirmation?
taskwarrior "task del" without confirmation dialog
0.099668
0
0
512
48,606,935
2018-02-04T10:07:00.000
1
0
1
0
python
48,607,001
3
false
0
0
You are wrong that it is an array of a specific type. A Python list is implemented as an array of references to (directly or indirectly, memory addresses of) other Python objects. There is more to it than that because there is a lot of housekeeping stuff as well.
1
1
0
According to resources I have read, list data structure in python is implemented with arrays. However, python list can contain different types of objects. If it's underlying data structure is array, how can the array contain different types of objects ? Am I wrong that the array is of specific data type ?
Underlying Structure of List in Python
0.066568
0
0
916
48,608,845
2018-02-04T14:02:00.000
0
0
1
0
python,multithreading,concurrency,python-multithreading,gevent
57,702,150
2
false
1
0
A Python thread is an OS thread controlled by the OS, which makes it a lot heavier since it needs a context switch; a green thread is lightweight, and since it lives in userspace, the OS does not create or manage it. I think you can use gevent: gevent = event loop (libev) + coroutines (greenlet) + monkey patching. Gevent gives you thread-like behaviour without using threads, so you can write normal code but still get async IO. Make sure you don't have CPU-bound stuff in your code.
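For illustration, a hedged sketch of the monkey-patch plus greenlet-pool pattern for I/O-bound scraping; the URLs, pool size and timeout are placeholder choices, and patch_all() must run before the other imports:

```python
# Monkey-patch the stdlib so blocking I/O cooperates with greenlets.
from gevent import monkey
monkey.patch_all()

from gevent.pool import Pool
import requests

def fetch(url):
    # Each call runs in its own greenlet; the GET yields while waiting on I/O.
    return url, requests.get(url, timeout=10).status_code

urls = ["https://example.com/page/%d" % i for i in range(100)]
pool = Pool(50)  # cap the number of concurrent greenlets
for url, status in pool.imap_unordered(fetch, urls):
    print(url, status)
```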
1
1
0
I'm trying to decide if I should use gevent or threading to implement concurrency for web scraping in python. My program should be able to support a large (~1000) number of concurrent workers. Most of the time, the workers will be waiting for requests to come back. Some guiding questions: What exactly is the difference between a thread and a greenlet? What is the max number of threads \ greenlets I should create in a single process (with regard to the spec of the server)?
Python Threading vs Gevent for High Volume Web Scraping
0
0
1
1,130
48,609,095
2018-02-04T14:26:00.000
4
0
0
0
python,plot,openstreetmap,networkx
48,799,838
1
false
0
0
Just set the annotate parameter to True in the osmnx.plot.plot_graph() function.
1
2
1
Now I can plot the map using osmnx, but I don't know the node id of nodes in the figure. That is how can I associate the node id and the node plot in the figure? Or, Can I plot the map with node id? Thanks!
Get specific node id from the figure plot by osmnx
0.664037
0
0
469
48,611,425
2018-02-04T18:10:00.000
5
0
0
0
python,socket.io,webserver,flask-socketio,eventlet
48,616,158
2
true
1
0
The eventlet web server supports concurrency through greenlets, same as gevent. No need for you to do anything, concurrency is always enabled.
1
2
0
I've started working a lot with Flask-SocketIO in Python with Eventlet and am looking for a solution to handle concurrent requests/threading. I've seen that it is possible with gevent, but how can I do it if I use eventlet?
Handle concurrent requests or threading Flask SocketIO with eventlet
1.2
0
1
2,111
48,612,009
2018-02-04T19:14:00.000
1
0
1
0
python,python-3.x,visual-studio,visual-studio-2017
48,612,065
1
true
0
0
For me it works using Ctrl+K,Ctrl+K (Toggling Bookmark) when sitting in a line of my python file. Jumping to it via Ctrl+Shift+K,Ctrl+Shift+P works as well. (Vanilla VS Community Edition 2017). The menu items in Edit/Bookmarks/... work as well. After I added the TextEditor Menu Bar I was also able to put a bookmark via the button you mentioned.
1
1
0
Can you place bookmarks in a python program with Visual Studio 2017? I have tried using the 'Toggle bookmark on this line' button, but it does nothing. Is the problem related to the fact that Python uses white space to separate code?
Visual Studio 2017 cannot create bookmarks in python files?
1.2
0
0
123
48,612,105
2018-02-04T19:22:00.000
0
0
0
0
python,plotly-dash
48,779,917
1
true
1
0
I have solved it. It was due to miscoding. The code was not in the right place. Thanks.
1
2
1
I would like you to share a template in Dash (Python) that generates a scatter chart with the help of plot.ly's stunning graphics. I am unable to get my Dash framework running, but I can do the same flawlessly in a Jupyter Notebook. I was just wondering if I could run it on Dash. Your help is much appreciated. Regards.
How to render a plot.ly Jupyter Notebook file into Dash python Dashboard
1.2
0
0
458
48,612,807
2018-02-04T20:40:00.000
1
0
0
1
python,google-app-engine,google-compute-engine
48,617,341
2
false
1
0
Yes, that's exactly what App Engine is designed for.
1
0
0
I have a website whose code contains a lot of computation (mostly in Python). Is there a way to perform those computations on Google App Engine every time and just return the results to the user?
Linking a website to Google App Engine
0.099668
0
0
38
48,617,220
2018-02-05T06:53:00.000
0
1
0
0
python,selenium-webdriver
69,818,465
1
false
0
0
I created a library in Python for logging info messages and screenshots in an HTML file, called selenium-logging. There is also a video explanation of the package on YouTube (25s) called "Python HTML logging".
1
1
0
I have created simple test cases using selenium web driver in python. I want to log the execution of the test cases at different levels. How do I do it? Thanks in advance.
Logging in selenium python
0
0
1
1,240
48,617,779
2018-02-05T07:39:00.000
0
0
0
1
python,mysql,django,pip,mysql-python
48,712,972
3
false
0
0
Turns out I had the wrong python being pointed to in my virtualenv. It comes preinstalled with its own default python version and so, I created a new virtualenv and used the -p to set the python path to my own local python path.
2
2
0
I am receiving the error: ImportError: No module named MySQLdb whenever I try to run my local dev server and it is driving me crazy. I have tried everything I could find online: brew install mysql pip install mysqldb pip install mysql pip install mysql-python pip install MySQL-python easy_install mysql-python easy_install MySQL-python pip install mysqlclient I am running out of options and can't figure out why I continue to receive this error. I am attempting to run my local dev server from using Google App Engine on a macOS Sierra system and I am using python version 2.7. I am also running: source env/bin/activate at the directory my project files are and am installing all dependencies there as well. My path looks like this: /usr/local/bin/python:/usr/local/mysql/bin:/usr/local/opt/node@6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Frameworks/Python.framework/Versions/2.7/bin Does anyone have further ideas I can attempt to resolve this issue?
VirtualEnv ImportError: No module named MySQLdb
0
1
0
865
48,617,779
2018-02-05T07:39:00.000
0
0
0
1
python,mysql,django,pip,mysql-python
71,918,271
3
false
0
0
In my project, with virtualenv, I just did pip install mysqlclient and like magic everything is ok
2
2
0
I am receiving the error: ImportError: No module named MySQLdb whenever I try to run my local dev server and it is driving me crazy. I have tried everything I could find online: brew install mysql pip install mysqldb pip install mysql pip install mysql-python pip install MySQL-python easy_install mysql-python easy_install MySQL-python pip install mysqlclient I am running out of options and can't figure out why I continue to receive this error. I am attempting to run my local dev server from using Google App Engine on a macOS Sierra system and I am using python version 2.7. I am also running: source env/bin/activate at the directory my project files are and am installing all dependencies there as well. My path looks like this: /usr/local/bin/python:/usr/local/mysql/bin:/usr/local/opt/node@6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Frameworks/Python.framework/Versions/2.7/bin Does anyone have further ideas I can attempt to resolve this issue?
VirtualEnv ImportError: No module named MySQLdb
0
1
0
865
48,620,524
2018-02-05T10:32:00.000
0
0
0
0
python,driver,pyqt5,spyder
48,680,952
2
true
0
1
This is not really a solution to the problem, but it's how I am able to use Spyder again. I downloaded and installed the latest Anaconda (Version 5.0.1) distribution. I am able to launch Spyder, which is included in that distribution, even with the docking station drivers still installed.
2
1
0
I am using Spyder through the WinPython-64bit-3.6.3.0Qt5 distribution on my Windows 7 Laptop. To add a third screen to my Laptop I started using a USB 3.0 Docking Station. After Installing the drivers Spyder won't launch, neither will the WinPython Control Panel. These are the drivers I installed: DisplayLink Graphics Driver Version: 8.0.762.0 DisplayLink Ethernet Driver Version: 8.0.403.0 DisplayLink Audio Driver Version: 8.0.745.0 Uninstalling the drivers fixes the problem, but my question is if this is a known issue and if there is a way to use both Spyder and the Docking Station. So far I have tried updating PyQt5, which had no effect.
Issues with WinPython after installing Docking Station Driver
1.2
0
0
99
48,620,524
2018-02-05T10:32:00.000
0
0
0
0
python,driver,pyqt5,spyder
48,666,300
2
false
0
1
I fixed mine. The issue was the way DisplayLink and Microsoft work together. Microsoft has admitted to there being an issue with the Windows kernel mode graphics driver. This is only an issue for machines with two+ graphics cards and the use of DisplayLink. DisplayLink uses the graphics card that Windows registers in the kernel, which by default is the primary monitor. On my HP laptop, the Intel graphics card is only used on the laptop screen (the primary) and is registered in the kernel, the NVIDIA card I have is used for all external screens, not registered. When you move an application, such as Spyder/Python, to the external monitors, the program's graphics will not work properly as you will be using DisplayLink, which is set to use the Intel graphics card, but you are now on the NVIDIA card. This does not happen to all apps, just those that rely heavily on the DisplayLink driver, these programs typically include GPU heavy programs such as emulators. To fix this, I disabled the Intel Graphics card, forcing the laptop to use the NVIDIA card and for the NVIDIA card to register with the kernel. This is not an ideal solution, but until Microsoft finds a fix for this, we are forced to use this method.
2
1
0
I am using Spyder through the WinPython-64bit-3.6.3.0Qt5 distribution on my Windows 7 Laptop. To add a third screen to my Laptop I started using a USB 3.0 Docking Station. After Installing the drivers Spyder won't launch, neither will the WinPython Control Panel. These are the drivers I installed: DisplayLink Graphics Driver Version: 8.0.762.0 DisplayLink Ethernet Driver Version: 8.0.403.0 DisplayLink Audio Driver Version: 8.0.745.0 Uninstalling the drivers fixes the problem, but my question is if this is a known issue and if there is a way to use both Spyder and the Docking Station. So far I have tried updating PyQt5, which had no effect.
Issues with WinPython after installing Docking Station Driver
0
0
0
99
48,622,996
2018-02-05T12:49:00.000
0
0
0
0
python,ajax,bootstrap-4,reload,tablesorter
48,707,139
1
false
1
0
Thanks, I finally did it using the following syntax: $("#my-table").trigger("update", [resort, dummyCallbackFunction]); where resort is a boolean set to true or false and dummyCallbackFunction is a function doing nothing in my case. It works perfectly.
1
0
0
I'm loading in background task (using ajax and Python api) a possibly large table by blocks of 25 rows. I'm using a bootstrap table sorter. I would like to be able to initialise table sorter after first block loaded and then refresh it after each block. If I simply initialise my table sorter on the first block loaded, filter and sort, it fails when a subsequent block is loaded. Is there a way to refresh/reload table sorter as soon as rows are added?
How to refresh tablesorter when rows are added?
0
0
0
31
48,625,618
2018-02-05T15:14:00.000
1
0
1
0
python,c++,git,synchronization,symlink
48,633,743
1
true
0
0
You can use symlinks if you aren't planning to use this on Windows. Don't use hooks or other approaches to copy the file in git, because then you have an unnecessary versioned copy that can get out of sync, unless you (as you mention) expect to need it to be deliberately out of sync. What I would recommend is to fix the root problem: “during publishing only this very subfolder is published”. What you need is a build process that copies the file where it is needed for the bindings to work, and then “publishes” the result. This can be a simple script or Makefile that copies the file and then does the "publish" operation. (Be sure to add the copy to .gitignore so it isn't accidentally added!)
1
1
0
I have a git repo for a single-header C/C++ library that has a certain .h file in the root. That's where the main code lives. Now I want to make some bindings for other programming languages like Python or Node.js. I would like to put each bindings in its own subfolder. Now the catch is that during publishing only this very subfolder is published. The .h file (located a few levels up and included by a relative path) is not published, and this breaks the installation of the bindings for almost every language. I can see the following options: Symlinks. This is very close to what I need, but they might be problematic on certain file systems, so I'd like to know if there are any reasonable alternatives. Hardlinks. AFAIK, not supported by Git at all. Git hook to ensure that files in the subfolders match the one at the top level, and update them if needed. But since Git hooks are not tracked by Git - some developers may omit the hooks and break it. A script to manually sync files. Pretty much like Git hooks, but more explicit. Might also allow to keep different versions of the .h file in different subfolders, and it would allow to update one bindings and not update the others for some time in the case of urgency. Maybe should come with a server-side hook or CI script to ensure the files match when tagged commit is pushed (e.g. when new version is released). What do you think would be the preferred solution, or are there any unnoticed pitfalls I have missed?
Git: keep two locations of the same file in sync
1.2
0
0
127
48,637,661
2018-02-06T07:30:00.000
0
0
0
0
python,xml,odoo,odoo-10,odoo-view
48,669,234
3
false
1
0
You cannot apply a domain over that menu; you should remove these menus and use/create buttons to print the Quotation or Order, since you can apply a domain over a button.
3
0
0
In the Sale module, we create Sale Order and Quotation reports. Both reports are displayed under the print button. We need to separate those reports based on Quotation and Sale Order. We are using the same model name for Sale and Quotation but write different view files.
How to separate Print report in Odoo 10?
0
0
0
355
48,637,661
2018-02-06T07:30:00.000
1
0
0
0
python,xml,odoo,odoo-10,odoo-view
48,859,784
3
false
1
0
Try this code to create two different buttons with the same code but a different string and id: <report id="report_sale_order" string="Quotation" model="sale.order" report_type="qweb-pdf" file="sale.report_saleorder" name="sale.report_saleorder" />
3
0
0
In the Sale module, we create Sale Order and Quotation reports. Both reports are displayed under the print button. We need to separate those reports based on Quotation and Sale Order. We are using the same model name for Sale and Quotation but write different view files.
How to separate Print report in Odoo 10?
0.066568
0
0
355
48,637,661
2018-02-06T07:30:00.000
1
0
0
0
python,xml,odoo,odoo-10,odoo-view
48,960,462
3
false
1
0
For sale order and quotation reports: if you confirm the quotation, it will print the sale order report and will not print the quotation report. So they are separated from each other.
3
0
0
In the Sale module, we create Sale Order and Quotation reports. Both reports are displayed under the print button. We need to separate those reports based on Quotation and Sale Order. We are using the same model name for Sale and Quotation but write different view files.
How to separate Print report in Odoo 10?
0.066568
0
0
355
48,637,985
2018-02-06T07:52:00.000
1
1
0
0
python,audio,wav
51,028,661
1
true
0
0
The best idea is to separate both channels and then get the size of each, because in WAV files size is directly proportional to the activity in that channel.
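For illustration, a hedged sketch that compares per-channel activity by mean absolute amplitude (a rough proxy for "who talks more") rather than raw size; it assumes SciPy is available, a standard PCM stereo WAV, and the filename is a placeholder:

```python
# Compare activity of the two channels of a stereo WAV and keep the busier one.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("call.wav")   # data shape: (n_samples, 2) for stereo
left = data[:, 0].astype(np.float64)
right = data[:, 1].astype(np.float64)

activity = [np.mean(np.abs(left)), np.mean(np.abs(right))]
busiest = int(np.argmax(activity))      # 0 = left, 1 = right
print("channel with more activity:", busiest)

wavfile.write("call_mono.wav", rate, data[:, busiest])  # drop the other channel
```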
1
2
0
I have numerous audio WAV files. Every audio file has 2 channels, and each channel contains the voice of a different person, so I want to get the channel where the activity is higher - the channel where the speaker is saying more - and I have to delete the other channel. As of now I am doing this with Audacity, but can I do it via Python or any terminal command in Ubuntu?
Identify channels in a wav file?
1.2
0
0
825
48,641,625
2018-02-06T11:09:00.000
0
1
1
0
python,encryption,mmap,in-memory,data-protection
72,453,786
1
false
0
0
Can you elaborate a bit more what the "other interfaces" require? If the other interfaces just need a file-like object (read, write, etc), then you can use io.StringIO or io.BytesIO from the standard library.
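For illustration, a hedged sketch of the io.BytesIO approach; decrypt_bytes() and "secret.bin" are placeholders standing in for whatever decryption routine and encrypted file the question has in mind:

```python
# Keep decrypted content purely in memory with io.BytesIO; nothing touches /tmp.
import io

def decrypt_bytes(blob):
    return blob  # placeholder: substitute your actual decryption routine here

with open("secret.bin", "rb") as f:
    buffer = io.BytesIO(decrypt_bytes(f.read()))

# Pass `buffer` anywhere a binary file-like object is accepted.
print(buffer.read(16))
buffer.seek(0)  # rewind before handing it to another consumer
```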
1
11
0
Is there a way in python to get the path to the in-memory file, so it'll behave as a normal file for methods which needs file path? My objective is to protect the file, so avoiding dumping into /tmp. Trying to read an encrypted file -> decrypt a file into memory -> use its path for other interfaces. mmap does not provide path or filename. Or any alternative solution for this problem ?
Path to in-memory-file without dumping in tmp
0
0
0
3,170
48,644,272
2018-02-06T13:31:00.000
-2
0
0
0
python
48,644,352
2
false
0
0
Try array_name.encode("uint8").decode("uint8") if this worked, then you can use the decode() method
1
0
1
I have an array which has intensity values from -3000 to 1000. I have thresholded all values less than -100 to -100 and all values greater than 400 to 400, after which I convert it to datatype np.uint8, which makes all the values in the array have intensity values from 0 to 255. What I am confused about is: is there an operation I can run which will take the array from the 0-255 intensity range back to the earlier one, -100 to 400? Any suggestions will be useful. Thanks in advance
Encode array to uint8 and then back
-0.197375
0
0
1,021
48,646,826
2018-02-06T15:46:00.000
1
0
0
0
python,rest,falconframework
48,646,870
1
true
1
0
You should only have one endpoint, not one for each type of user. What if you have moderators? Will you also create a /mods/login ? What each user should and shouldn't have access to should be sorted out with permissions.
1
1
0
I'm having a little trouble figuring out if I should have the API for admins and for users split. So: Admins should log in using /admin/login with a POST request, and users just /login. Admins should access/edit/etc. resources on /admin/resourceName and users just /resourceName.
Should I make an API for users and an API for admins?
1.2
0
1
40
48,647,589
2018-02-06T16:27:00.000
2
0
1
0
python,c++,string,algorithm,sorting
48,647,790
3
false
0
0
1) Concatenate your strings (not really needed, you can also count chars in the individual strings). 2) Create an array with length equal to the total nr of charcodes. 3) Read through your concatenated string and count occurrences in the array made at step 2. 4) By reading through the char freq array, build up an output array with the right nr of repetitions of each char. Since each step is O(n), the whole thing is O(n). [@patatahooligan: had made this edit before I saw your remark, accidentally duplicated the answer]
3
0
0
I recently was given a question in a coding challenge where I had to merge n strings of alphanumeric characters and then sort the new merged string while only allowing alphabetical characters in the sorted string. Now, this would be fairly straight forward except that the caveat added was that the algorithm had to be O(n) (it didn't specify whether this was time or space complexity or both). My initial approach was to concatenate the strings into a new one, only adding alphabetical characters and then sorting at the end. I wanted to come up with a more efficient solution but I was given less time than I was initially told. There isn't any sorting algorithm (that I know of) which runs in O(n) time, so the only thing I can think of is that I could increase the space complexity and use a sorted hashtable (e.g. C++ map) to store the counts of each character and then print the hashtable in sorted order. But as this would require possibly printing n characters n times, I think it would still run in quadratic time. Also, I was using python which I don't think has a way to keep a dictionary sorted (maybe it does). Is there anyway this problem could have been solved in O(n) time and/or space complexity?
Merging and sorting n strings in O(n)
0.132549
0
0
549
48,647,589
2018-02-06T16:27:00.000
1
0
1
0
python,c++,string,algorithm,sorting
48,647,724
3
false
0
0
If I've understood the requirement correctly, you're simply sorting the characters in the string? I.e. ADFSACVB becomes AABCDFSV? If so, then the trick is to not really "sort". You have a fixed (and small) number of values, so you can simply keep a count of each value and generate your result from that. E.g. given ABACBA: in the first pass, increment counters in an array indexed by characters. This produces [A] == 3, [B] == 2, [C] == 1. In the second pass, output each character the number of times indicated by the counters: AAABBC. In summary, you're told to sort, but thinking outside the box, you really want a counting algorithm.
3
0
0
I recently was given a question in a coding challenge where I had to merge n strings of alphanumeric characters and then sort the new merged string while only allowing alphabetical characters in the sorted string. Now, this would be fairly straight forward except that the caveat added was that the algorithm had to be O(n) (it didn't specify whether this was time or space complexity or both). My initial approach was to concatenate the strings into a new one, only adding alphabetical characters and then sorting at the end. I wanted to come up with a more efficient solution but I was given less time than I was initially told. There isn't any sorting algorithm (that I know of) which runs in O(n) time, so the only thing I can think of is that I could increase the space complexity and use a sorted hashtable (e.g. C++ map) to store the counts of each character and then print the hashtable in sorted order. But as this would require possibly printing n characters n times, I think it would still run in quadratic time. Also, I was using python which I don't think has a way to keep a dictionary sorted (maybe it does). Is there anyway this problem could have been solved in O(n) time and/or space complexity?
Merging and sorting n strings in O(n)
0.066568
0
0
549
48,647,589
2018-02-06T16:27:00.000
2
0
1
0
python,c++,string,algorithm,sorting
48,647,735
3
true
0
0
Your counting sort is the way to go: build a simple count table for the 26 letters in order. Iterate through your two strings, counting letters, ignoring non-letters. This is one pass of O(n). Now, simply go through your table, printing each letter the number of times indicated. This is also O(n), since the sum of the counts cannot exceed n. You're not printing n letters n times each: you're printing a total of n letters.
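For illustration, a small sketch of the counting approach described in these answers; folding everything to lower case is an added assumption, since the question does not say how case should be handled:

```python
# Counting "sort": one O(n) pass to count letters, one pass to emit them.
def merge_and_sort(strings):
    counts = [0] * 26
    for s in strings:
        for ch in s.lower():
            if "a" <= ch <= "z":                  # ignore non-alphabetical chars
                counts[ord(ch) - ord("a")] += 1
    return "".join(chr(ord("a") + i) * c for i, c in enumerate(counts))

print(merge_and_sort(["b2an", "ana!"]))  # -> "aaabnn"
```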
3
0
0
I recently was given a question in a coding challenge where I had to merge n strings of alphanumeric characters and then sort the new merged string while only allowing alphabetical characters in the sorted string. Now, this would be fairly straight forward except that the caveat added was that the algorithm had to be O(n) (it didn't specify whether this was time or space complexity or both). My initial approach was to concatenate the strings into a new one, only adding alphabetical characters and then sorting at the end. I wanted to come up with a more efficient solution but I was given less time than I was initially told. There isn't any sorting algorithm (that I know of) which runs in O(n) time, so the only thing I can think of is that I could increase the space complexity and use a sorted hashtable (e.g. C++ map) to store the counts of each character and then print the hashtable in sorted order. But as this would require possibly printing n characters n times, I think it would still run in quadratic time. Also, I was using python which I don't think has a way to keep a dictionary sorted (maybe it does). Is there anyway this problem could have been solved in O(n) time and/or space complexity?
Merging and sorting n strings in O(n)
1.2
0
0
549
48,648,352
2018-02-06T17:09:00.000
1
0
1
1
python
51,906,635
2
true
0
0
To solve this I created a Unix wrapper on top of my Python script and sourced all the variables in that wrapper. I was then able to import all the variables in the Python script.
1
1
0
I need to source an environment file/Unix script from Python script to fetch the variables and use those variables in Python script. I am able to source the environment file/Unix script but not able to fetch the value of the variable and use it. My env file (env_var.env)looks like:- export FLAG=YES; export TAG=A; I want to source this file in my python script and need to print and use the values of FLAG and TAG. Thanks in advance.
How to use variables in Python by sourcing a shell file
1.2
0
0
105
48,648,358
2018-02-06T17:09:00.000
1
0
0
0
python,django,e-commerce,sandbox,django-oscar
48,667,653
2
false
1
0
If it's for your project, you should follow the docs. But you can always use it as a reference.
1
1
0
Is it okay to edit the existing django-oscar sandbox project as my project or is it better to follow the django-oscar documentation?
Django oscar sandbox editing or django oscar documentaion
0.099668
0
0
334
48,648,373
2018-02-06T17:10:00.000
1
0
0
0
python,django,mongodb,django-models,python-3.6
54,759,944
2
false
1
0
If you want multiple DBs (in Mongo itself) and need to switch DB for long time, then don't go for mongoengine. For simple interaction mongoengine is a nice option.
1
0
0
Can anyone explain me the difference between mongoengine and django-mongo-engine and pymongo. I am trying to connect to mongodb Database in Django2.0 and python3.6
Difference between mongoengine and django-mongo-engine and pymongo
0.099668
1
0
1,047
48,648,927
2018-02-06T17:42:00.000
0
1
1
0
python,pulp,coin-or-cbc
48,711,131
1
false
0
0
Actually Stu, this might work! - the "dummy var" in my case could be the source/sink node, which I can loosen the flow constraints on, allowing infinite flow in/out but with a large cost. This makes the solution feasible with a very bad high optimal cost; then the pricing of the new variables should work and show me which variables to add to the problem on the next iteration. I'll give it a try and report back. My only concern is that the bigM cost coefficient on the source/sink node may skew the pricing of the variables making all of them look relatively attractive. This would be counter productive bc adding most of the variables back into the problem will defeat the purpose of my column generation in the first place. I'll test it...
1
0
0
I am solving a minimization linear program using COIN-OR's CLP solver with PULP in Python. The variables that are included in the problem are a subset of the total number of possible variables and sometimes my pricing heuristic will pick a subset of variables that result in an infeasible solution. After which I use shadow prices to price new variables in. My question is, if the problem is infeasible, I still get values from calling prob.constraints[c].pi, but those values don't always seem to be "valid" or "good" per se. Now, a solver like Gurobi won't even let me call the shadow prices after an infeasible solve.
Are shadow prices valid after an infeasible solve from COIN using PULP
0
0
0
214
48,649,927
2018-02-06T18:49:00.000
0
0
0
1
eclipse,python-3.x,pandas,pydev
71,702,890
2
false
0
0
I faced the same problem when working with an Eclipse PyDev project. Here are the steps that helped me resolve the issue: In Eclipse open the workspace, click on Windows -> Preferences -> PyDev -> Interpreters -> Python Interpreter -> click on Manage with PIP. In "Command to execute", enter install pandas. Problem solved.
1
1
0
I have installed python 3.6.4 on my MAC OS and have Eclipse neon 2.0 running. I have added pydev plugin to work on python projects. I need to import pandas library, but there is no such option as windows -> preferences -> libraries Can someone help me with any other way to install python libraries in neon 2. And also how to run python cmds in terminal window in pydev? Thanks!
installing pandas in pydev eclipse neon 2
0
0
0
1,889
48,654,048
2018-02-07T00:17:00.000
1
1
1
1
python,terminal
48,654,132
4
false
0
0
Open a command prompt and make sure you are in the right directory. If you are on Windows you can simply hold Shift, right-click the folder and open a command line there. If not, you can manually navigate to the directory by using the cd (change directory) command. Type python my_program.py test.txt and it will work.
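For illustration, a hedged sketch of what my_program.py could look like so that the invocation above picks up the filename; the usage message and line handling are illustrative choices:

```python
# Read the test file named on the command line: python my_program.py test.txt
import sys

if len(sys.argv) < 2:
    sys.exit("usage: python my_program.py <test-file>")

path = sys.argv[1]          # "test.txt" in the example invocation
with open(path) as f:
    for line in f:
        print(line.rstrip())
```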
2
0
0
I want to run my program if I type a specific command. My Python program should take the test file as a command line argument. For example in the command prompt or the terminal, my program should run if you type “python my_program.py test.txt”.
How to run a program by typing a specific command
0.049958
0
0
69
48,654,048
2018-02-07T00:17:00.000
1
1
1
1
python,terminal
48,654,102
4
false
0
0
Use the execfile built-in function from Python; you can google it, and there are many answers on this site with this info as well. Do some research before asking.
2
0
0
I want to run my program if I type a specific command. My Python program should take the test file as a command line argument. For example in the command prompt or the terminal, my program should run if you type “python my_program.py test.txt”.
How to run a program by typing a specific command
0.049958
0
0
69
48,654,148
2018-02-07T00:28:00.000
1
0
1
0
python
48,654,163
6
false
0
1
No. Not unless your script is loaded inside some wrapper code that in turn executes your file's run or execute function.
2
1
0
I oftentimes see classes containing def execute(self) and def run() Does python automatically pick this up like int main() in C++?
Why do classes have Def Run() and Def Execute()?
0.033321
0
0
10,001
48,654,148
2018-02-07T00:28:00.000
0
0
1
0
python
68,869,922
6
false
0
1
The run() method is used by the Thread and Process classes (in the threading and multiprocessing modules) and is the entry point to the object when you call its .start() method. If you've seen it outside of that usage then I would think it's probably either contextual or coincidental. I'm not aware of any canonical execute() method.
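As a minimal example of the run()/start() relationship mentioned above:

```python
# start() spawns the thread, which then calls run() as its entry point.
import threading

class Worker(threading.Thread):
    def run(self):                      # called automatically by start()
        print("working in", self.name)

t = Worker()
t.start()   # do NOT call run() directly -- that would execute in the caller
t.join()
```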
2
1
0
I oftentimes see classes containing def execute(self) and def run() Does python automatically pick this up like int main() in C++?
Why do classes have Def Run() and Def Execute()?
0
0
0
10,001
48,654,727
2018-02-07T01:45:00.000
0
0
0
0
python,selenium,jenkins,automation,robotframework
48,679,101
4
false
1
0
You can work around this by installing a VNC server and running the Jenkins slave within it. Then the browser will start with a GUI and the test will work.
2
6
0
I am using Robotframework to create test scripts for an application. This application requires many keyboard click/action combinations that are unavoidable. Right now I am using the PyAutoGui library to simulate these actions and they are working fine, but when I run them through a headless browser on Jenkins, those actions are not registered. The error that I get is "PyAutoGUI fail-safe triggered from mouse moving to upper-left corner. To disable this fail-safe, set pyautogui.FAILSAFE to False." However, even after changing the Failsafe value to false, the keyboard action is still not captured. The weird thing is that if someone is physically logged into the Jenkins box while the tests are running, the library works perfectly fine, but when running headless, the library breaks. Is there another library I can possible use or a possible work around for this situation? Thanks in advance!
Is it possible use keyboard actions without using the "PyAutoGUI" library on jenkins?
0
0
0
3,159
48,654,727
2018-02-07T01:45:00.000
1
0
0
0
python,selenium,jenkins,automation,robotframework
48,655,247
4
false
1
0
I have been automating a lot of web applications at work. I too started with PyAutoGUI, and had similar problems that you are experiencing going from my laptop to our production server that was running the scripts. The solution I found was Selenium Webdriver. If what you are testing has an ip address, this may be the solution. In my opinion it is actually easier than PyAutoGUI.
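For illustration, a hedged sketch of sending keyboard input through Selenium itself, which works in a headless browser (unlike PyAutoGUI); the URL and element locator are placeholders, the locator call shown is the older Selenium 3 style, and the headless flag may differ across browser/driver versions:

```python
# Send keystrokes via Selenium so they work headless on the Jenkins node.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

options = webdriver.ChromeOptions()
options.add_argument("--headless")

driver = webdriver.Chrome(options=options)
driver.get("https://example.com/login")             # placeholder URL

field = driver.find_element_by_name("username")     # Selenium 4: find_element(By.NAME, ...)
field.send_keys("robot")
field.send_keys(Keys.TAB, "secret", Keys.ENTER)     # keyboard actions, no GUI needed

driver.quit()
```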
2
6
0
I am using Robotframework to create test scripts for an application. This application requires many keyboard click/action combinations that are unavoidable. Right now I am using the PyAutoGui library to simulate these actions and they are working fine, but when I run them through a headless browser on Jenkins, those actions are not registered. The error that I get is "PyAutoGUI fail-safe triggered from mouse moving to upper-left corner. To disable this fail-safe, set pyautogui.FAILSAFE to False." However, even after changing the Failsafe value to false, the keyboard action is still not captured. The weird thing is that if someone is physically logged into the Jenkins box while the tests are running, the library works perfectly fine, but when running headless, the library breaks. Is there another library I can possible use or a possible work around for this situation? Thanks in advance!
Is it possible use keyboard actions without using the "PyAutoGUI" library on jenkins?
0.049958
0
0
3,159