Dataset columns (name, dtype, observed range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
48,757,083
2018-02-12T23:11:00.000
0
0
0
0
python,nginx,flask,uwsgi
48,794,623
2
false
1
0
Finally I was able to solve this issue, thanks to @Jeff Storey and @joppich. My proxy server was not able to read the response from my backend server, which was causing this issue. I added an interceptor to catch all exceptions and propagate them to NGINX via uwsgi, together with the nginx directive proxy_intercept_errors on;. Big thanks to Jeff and joppich. I appreciate your help.
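A minimal, hypothetical sketch of the kind of interceptor described above, assuming Flask 1.x; the handler name and payload are illustrative, not the asker's actual code:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.errorhandler(Exception)
    def handle_unhandled_exception(exc):
        # Return an explicit 500 so uwsgi hands nginx a complete HTTP response
        # (nginx can then rewrite it via proxy_intercept_errors on;).
        return jsonify(error=str(exc)), 500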
2
0
0
I am running a Flask application behind nginx using a uwsgi socket configuration. This may be a stupid question, but the issue I am facing is that whenever an exception is raised in the Flask code (an unhandled exception, for example 1/0), nginx returns 502 instead of 500. I wanted to know whether the exception is simply not propagated to nginx as a 500 over the uwsgi unix socket by default, or whether I need to specify this explicitly. Somewhere I read that Flask doesn't raise a 500 error message automatically for exceptions. Any comments will be helpful. Thanks.
nginx gives 502 for non handled exception 500 error in flask
0
0
0
1,072
48,757,970
2018-02-13T00:58:00.000
0
0
1
0
tensorflow,ipython
48,777,636
1
false
0
0
I think I figured out the problem. pip was pointing to /Library/Frameworks/Python.framework/Versions/3.4/bin/pip, while my ipython was pointing to /opt/local/bin/ipython. I re-installed tensorflow within my virtual environment by calling /opt/local/bin/pip-2.7 install --upgrade tensorflow. Now I can use tensorflow within ipython.
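A quick, hedged way to check for this kind of interpreter mismatch from inside ipython (standard library only):

    import sys
    print(sys.executable)   # which interpreter ipython is actually running
    print(sys.path[:3])     # first few places it will look for packages such as tensorflow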
1
0
1
tensorflow works using python in a virtualenv I created, but tensorflow doesn't work in the same virtualenv with ipython. This is the error I get: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name mock was given, but was not able to be found. I have tried installing ipython within the virtual environment. This is the message I get: Requirement already satisfied: ipython in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages If I try to uninstall ipython within the virtual environment. I get this message: Not uninstalling ipython at /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages Any ideas on how to get this to work? I don't know how to force the install of ipython to be inside the virtual environment. I've tried deleting the virtual environment and making a new one from scratch, but I get the same error.
Running tensorflow in ipython
0
0
0
213
48,759,535
2018-02-13T04:35:00.000
0
0
0
0
python,tensorflow,computer-vision
48,761,331
1
true
0
0
The scores argument decides the sorting order. The method tf.image.non_max_suppression goes through the input bounding boxes (greedily, so all input entries are covered) in the order decided by this scores argument, and selects only those bounding boxes that do not overlap (by more than iou_threshold) with boxes already selected. The idea that "NMS first looks at the bottom right coordinate, sorts according to it and then calculates IoU" is not correct; can you cite any resource that made you think this way?
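A small sketch of the call being discussed, assuming TensorFlow with eager execution; the box and score values are made up:

    import tensorflow as tf

    boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                         [0.0, 0.0, 0.9, 0.9],
                         [2.0, 2.0, 3.0, 3.0]])
    scores = tf.constant([0.9, 0.8, 0.1])
    # Boxes are visited in descending order of `scores`; any box overlapping an
    # already-selected box by more than iou_threshold is suppressed.
    selected = tf.image.non_max_suppression(boxes, scores,
                                            max_output_size=3, iou_threshold=0.5)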
1
1
1
I read the documentation for the function and I understood how NMS works. What I'm not clear about is the scores argument to this function. I think NMS first looks at the bottom right coordinate, sorts according to it, calculates IoU, and then discards boxes whose IoU is greater than the threshold you set. In this theory the scores argument does absolutely nothing, and the documentation doesn't say much about it. I want to know how the argument affects the function. Thank you.
What does tensorflow nonmaximum suppression function's argument "score" do to this function?
1.2
0
0
537
48,761,144
2018-02-13T07:00:00.000
1
0
0
0
python-3.x,tensorflow,computer-vision,softmax,sigmoid
48,762,415
1
true
0
0
Since you're doing single label classification, softmax is the best loss function for this, as it maps your final layer logit values to a probability distribution. Sigmoid is used when it's multilabel classification. It's always better to use a momentum based optimizer compared to vanilla gradient descent. There's a bunch of such modified optimizers like Adam or RMSProp. Experiment with them to see what works best. Adam is probably going to give you the best performance. You can add an extra label no_class, so your task will now be a 6+1 label classification. You can feed in some random images with no_class as the label. However the distribution of your random images must match the test image distribution, else it won't generalise.
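A minimal Keras sketch of the advice above (softmax head plus a momentum-based optimizer); it is not the retrain.py code, and the layer sizes and feature dimension are made up:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(2048,)),
        tf.keras.layers.Dense(6, activation='softmax'),   # single-label -> softmax
    ])
    model.compile(optimizer='adam',                        # momentum-based optimizer
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])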
1
1
1
I am doing transfer learning/retraining using the Tensorflow Inception V3 model. I have 6 labels. A given image can be of one single type only, i.e. no multi-class detection is needed. I have three queries: Which activation function is best for my case? Presently the retrain.py file provided by tensorflow uses softmax; what other methods are available (like sigmoid etc.)? Which optimiser function should I use (GradientDescent, Adam, etc.)? I want to identify out-of-scope images, i.e. if a user inputs a random image, my algorithm should say that it does not belong to the described classes. Presently, with 6 classes, it gives one class as a sure output, but I do not want that. What are possible solutions for this? Also, what other parameters may we tweak in tensorflow? My baseline accuracy is 94% and I am looking for something close to 99%.
Image Classification using Tensorflow
1.2
0
0
124
48,762,723
2018-02-13T08:49:00.000
4
0
1
0
python,python-2.7
49,032,416
1
true
0
0
The data provider is the connection to the underlying file or database that holds the geospatial information to be displayed. In QGIS, a data provider (instance of qgis.core.QgsVectorDataProvider) allows the vector/raster layer to access the features within the data source. It includes a geometry type (stored in the data source), a list of fields that provide information about the attributes stored for each feature, and the ability to explore features within the data source (using getFeatures() method and QgsFeatureRequest class). You can access the various data providers using the core.QgsProviderRegistry class.
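A hedged PyQGIS sketch of using a data provider, meant to run inside the QGIS Python console; the file path and layer name are placeholders:

    from qgis.core import QgsVectorLayer

    layer = QgsVectorLayer("/path/to/data.shp", "example layer", "ogr")
    provider = layer.dataProvider()
    for field in provider.fields():          # attribute schema held by the provider
        print(field.name())
    for feature in provider.getFeatures():   # features read straight from the data source
        print(feature.id())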
1
2
0
I'm a beginner in PyQGIS and I've come across the dataProvider() function a few times now. The problem is I don't know what it does and what it's for. I have been searching everywhere for its meaning and use/s. Any help is appreciated :)
What is dataProvider() in PyQGIS and what does it do?
1.2
0
0
2,025
48,764,814
2018-02-13T10:40:00.000
2
0
0
1
python-3.x,user-interface,rdp,pywinauto
49,934,460
1
true
0
0
It is not possible to automate an RDP window using pywinauto, as the RDP window itself is just an image of the remote desktop. Printing the control identifiers of the RDP window gives only the UI of the RDP screen itself. The solution is to install python+pywinauto on the remote machine.
1
0
0
Can any of you help me with an automation task which involves connecting through RDP and automating a certain task in a particular application which is stored on that server? I have found scripts for the RDP connection and for Windows GUI automation separately, but I have become a bit confused about integrating them. It would be great if anyone could help me with the python library name :)
GUI Automation in RDP
1.2
0
1
1,322
48,766,455
2018-02-13T12:05:00.000
0
0
0
0
python,pyqt,squish
52,899,603
3
false
0
1
Adding TimeoutMilliseconds did not work, so I added time.sleep(Seconds) instead, and this worked better for me.
1
1
0
I am using Squish 6.3 Qt. The application I am testing contains a QLabel whose content changes dynamically. Is it possible to wait for the label to be set to a particular value? I can't use waitForObject as the object always exists and only its text value keeps changing.
Wait for an object property to be set in squish
0
0
0
2,137
48,766,723
2018-02-13T12:20:00.000
0
0
0
0
python,selenium,import
48,767,767
1
false
0
0
Is it possible that you're using e.g. Python 3 for your project, and selenium is installed for e.g. Python 2? If that is the case, try pip3 install selenium
1
0
0
I have a python project with Selenium that I was working on a year ago. When I came back to work on it and tried to run it I get the error ImportError: No module named selenium. I then ran pip install selenium inside the project which gave me Requirement already satisfied: selenium in some/local/path. How can I make my project compiler (is that the right terminology?) see my project dependencies?
Import error "No module named selenium" when returning to Python project
0
0
1
141
48,767,750
2018-02-13T13:18:00.000
0
0
0
0
python,opencv,image-processing,python-imaging-library
48,771,960
1
false
0
0
There are two common methods: bilinear interpolation and bicubic interpolation. These evaluate an intermediate value based on the values at four or sixteen neighboring pixels, using weighting functions based on the fractional parts of the coordinates. Look up these expressions. In my experience, bilinear quality is often sufficient.
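A hedged OpenCV sketch of this kind of subpixel lookup using cv2.remap; the source image and the coordinate maps are stand-ins for the asker's transform:

    import cv2
    import numpy as np

    src = np.random.rand(100, 100).astype(np.float32)        # placeholder source image
    grid_y, grid_x = np.indices(src.shape, dtype=np.float32)
    map_x = grid_x + 0.3    # stand-in for the nonlinear transform: source (x, y)
    map_y = grid_y + 0.7    # coordinates for each destination pixel
    dst = cv2.remap(src, map_x, map_y, interpolation=cv2.INTER_LINEAR)  # or cv2.INTER_CUBIC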
1
1
1
I have an image and am transforming it with a nonlinear spatial transformation. I have written a function that, for every pixel (i, j) in the destination image array, returns a coordinate (y, x) in the source array. The returned coordinate is a floating point value, meaning that it corresponds to a point that lies between the pixels in the source image. Does anyone know if there is an established method in PIL or opencv to interpolate the value of this subpixel, or should I roll my own? Thanks!
estimation of subpixel values from images in Python
0
0
0
1,342
48,769,149
2018-02-13T14:31:00.000
0
0
1
0
python,search,data-structures
48,769,576
1
false
0
0
I realized that I misread the assignment. It said: "Important note: Make sure to use the Stack, Queue and PriorityQueue data structures provided to you in util.py! These data structure implementations have particular properties which are required for compatibility with the autograder." I had misread it as saying that I need to use all of them, when it is really saying that if I want to use them I should use their version.
1
1
1
I'm working on a project from the Berkeley AI curriculum, and they require me to use stacks, queues, and priority queues in my Depth First Graph Search implementation. I stored my fringe in a priority queue and my already visited states in a set. What am I supposed to use stacks and queues for in this assignment? I'm not a student at Berkeley and I'm just using their curriculum for an independent study in high school and I got permission from my instructor to ask this online, so this is not a case of cheating on homework.
Why would I need stacks and queues for Depth First Search?
0
0
0
86
48,769,882
2018-02-13T15:10:00.000
6
0
0
0
python,pygame
48,770,582
1
true
0
1
You must first understand how pygame.display.flip and pygame.display.update work. When the screen mode pygame.DOUBLEBUF is set, Pygame actually maintains two screens: the active screen which is presently displayed and a buffer which you (the programmer) can update behind the scenes (without the user seeing anything). Once you are done with your edits on the buffer, you can use pygame.display.flip to switch the active screen with the buffer. The entire screen is updated. This is the recommended way to update the entire screen. Also, this is the only way to update non-software screens (OPENGL and Hardware accelerated screens for example). pygame.display.update on the other hand treats the screen as a group of pixels (that's called a software screen). This allows a Pygame program to update only a portion of the screen. This is faster as only a portion of the screen needs to be modified. Now, if the entire screen is to be updated (pygame.display.flip and pygame.display.update without any arguments) pygame.display.flip is faster. Remember, I said OpenGL and HW-accelerated screens (SOFT-screens too) maintain a buffer. Drawing to this buffer is slow, but flipping is very fast (in HW-screens and OpenGL). Updating the entire screen using pygame.display.update is even slower as it does things pixel by pixel and without HW-acceleration.
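A small illustrative pygame sketch of the difference (window size and colors are arbitrary):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    changed = screen.fill((200, 0, 0), rect=pygame.Rect(10, 10, 50, 50))
    pygame.display.update(changed)   # redraw only the rectangle(s) that changed
    pygame.display.flip()            # or push the entire screen/buffer at once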
1
0
0
I do not understand what the difference is between pygame.display.update() and pygame.display.flip(). I have tried both and it seems that update() is slower than flip()... EDIT: My question is why update() with no parameters is much slower than flip(). Thanks!
Why is pygame.display.update() slower than pygame.display.flip()?
1.2
0
0
1,482
48,770,542
2018-02-13T15:46:00.000
36
0
0
0
python,pandas,csv,pickle
62,222,676
2
false
0
0
csv: ✅ human readable, ✅ cross platform; ⛔ slower, ⛔ more disk space, ⛔ doesn't preserve types in some cases. pickle: ✅ fast saving/loading, ✅ less disk space; ⛔ not human readable, ⛔ Python only. Also take a look at the parquet format (to_parquet, read_parquet): ✅ fast saving/loading, ✅ less disk space than pickle, ✅ supported by many platforms; ⛔ not human readable.
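A short illustrative pandas round trip for the formats mentioned (file names are arbitrary; parquet needs pyarrow or fastparquet installed):

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

    df.to_csv("sub.csv", index=False)   # human-readable text
    csv_df = pd.read_csv("sub.csv")

    df.to_pickle("sub.pkl")             # binary, preserves dtypes, Python-only
    pkl_df = pd.read_pickle("sub.pkl")

    # df.to_parquet("sub.parquet"); pd.read_parquet("sub.parquet")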
1
31
1
I am learning python pandas. I see a tutorial which shows two ways to save a pandas dataframe. pd.to_csv('sub.csv') and to open pd.read_csv('sub.csv') pd.to_pickle('sub.pkl') and to open pd.read_pickle('sub.pkl') The tutorial says to_pickle is to save the dataframe to disk. I am confused about this. Because when I use to_csv, I did see a csv file appears in the folder, which I assume is also save to disk right? In general, why we want to save a dataframe using to_pickle rather than save it to csv or txt or other format?
What is the difference between save a pandas dataframe to pickle and to csv?
1
0
0
25,209
48,770,786
2018-02-13T15:59:00.000
4
0
0
0
python,numpy
48,770,832
2
true
0
0
"lin" stands for linear, as in a linear space: in other words, we take n evenly spaced samples from a straight line over an interval.
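A tiny example of what that means in practice:

    import numpy as np

    np.linspace(0.0, 1.0, num=5)
    # array([0.  , 0.25, 0.5 , 0.75, 1.  ])  -> 5 evenly ("linearly") spaced samples over [0, 1]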
1
4
1
I'm learning python and numpy. The docstring of numpy.linspace says Return evenly spaced numbers over a specified interval. Returns num evenly spaced samples, calculated over the interval [start, stop]. So I guess the "space" part of linspace means "space". But what does "lin" stand for?
Why linspace was named like that in numpy?
1.2
0
0
672
48,772,017
2018-02-13T17:05:00.000
0
0
1
0
java,python
48,772,107
4
false
1
0
Take a look at Collections in Java. There are many list implementations (ArrayList, LinkedList, etc.). Choose the data structure best suited to your requirements and complexity (both space and time).
1
10
0
In Python there is a data structure called 'List'. By using 'List' data structure in Python we can append, extend, insert, remove, pop, index, count, sort, reverse. Is there any similar data structure in Java where we can get all that function like Python List?
Java equivalent of Python List
0
0
0
11,087
48,772,583
2018-02-13T17:39:00.000
1
0
1
0
python,google-cloud-platform,google-cloud-functions
50,966,006
4
false
0
0
You can use AWS Lambda as a workaround if you want to keep Python as your main language. Some modules/packages will need to be imported via a zip file with AWS Lambda, but it has a broader range of usable languages than GCF.
1
10
0
Can Google Cloud Functions handle python with packages like sklearn, pandas, etc? If so, can someone point me in the direction of resources on how to do so. I've been searching a while and it seems like this is impossible, all I've found are resources to deploy the base python language to google cloud.
Python in Google Cloud Functions
0.049958
0
0
12,755
48,775,903
2018-02-13T21:25:00.000
4
0
1
0
python
48,775,915
2
true
0
0
No, it does not involve multiprocessing at all, and it doesn't involve threading either; zip acts entirely in the current thread. However, zip is lazy (in current versions of Python), meaning elements will not be evaluated until you iterate over the zip instance.
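A quick single-threaded demonstration of that laziness:

    a = [1, 2, 3]
    b = ["x", "y", "z"]
    z = zip(a, b)      # nothing is paired yet; zip returns a lazy iterator
    print(next(z))     # (1, 'x') - pairs are produced on demand
    print(list(z))     # [(2, 'y'), (3, 'z')] - the rest, still in this one thread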
2
2
0
I'm wondering if Python's zip() function is executed in a multi-processing way, or it is actually done by a single thread, and then simply combines the results together?
Is Python's zip() function executed in a multi-processing way?
1.2
0
0
373
48,775,903
2018-02-13T21:25:00.000
1
0
1
0
python
48,775,948
2
false
0
0
Nope, it is a single process. It is done in a single thread.
2
2
0
I'm wondering if Python's zip() function is executed in a multi-processing way, or it is actually done by a single thread, and then simply combines the results together?
Is Python's zip() function executed in a multi-processing way?
0.099668
0
0
373
48,776,116
2018-02-13T21:41:00.000
0
1
1
0
python,coverage.py
48,830,576
3
true
0
0
Since coverage.py does not provide this feature, my solution was to write a small ast-based function that calculates these ghost hit points and removes them from the coverage.py results.
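A hedged sketch of the kind of ast-based helper described (not the author's actual code; the file name is illustrative):

    import ast

    source = open("mymodule.py").read()
    skip_lines = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom, ast.FunctionDef, ast.ClassDef)):
            skip_lines.add(node.lineno)   # lines whose "hits" should be discounted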
1
0
0
I have a python program that imports other files, which potentially import other files, as is normal with python development. The problem is, when I measure coverage with coverage.py, some files which are imported but not used get coverage "hits" on their def and import statements. My question is: is there a way to avoid those hits? For my particular application these hits are considered noise.
With coverage.py, how to skip coverage of import and def statements
1.2
0
0
1,816
48,776,853
2018-02-13T22:38:00.000
2
0
1
0
python,python-3.x,image-scanner,python-importlib
48,776,883
1
false
0
0
importlib is built in with Python 3 (at least for me); you can import it directly without installing anything. The error from pip install is possibly due to importlib being built in, so there's no distribution that's publicly available.
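A quick check you can run with nothing installed, assuming Python 3:

    import importlib.util   # ships with Python 3; nothing to pip install

    spec = importlib.util.find_spec("imagescanner")
    if spec is None:
        print("imagescanner is not installed for this interpreter")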
1
3
0
I am trying to pip install importlib with python 3.6, but I get an Import Error saying: 'NO Module named "importlib.util"'. This also comes up when I try to pip install imagescanner, which is my real intention. Building an App that connects to Image Scanner Devices, but that's another problem... Thanks for any help!
What is importlib.util in python3?
0.379949
0
0
6,403
48,779,478
2018-02-14T04:04:00.000
0
0
0
0
python-3.x,wxwidgets
48,779,658
2
false
0
1
Try this: self.YourCheckboxObject.SetToolTip(wx.ToolTip("Paste your tooltip text here"))
1
0
0
And if so, how would one add a tooltip to a checkbox object? It appears that the control inherits from wxWindow which has tooltips, so can it be added to a wxCheckBox? Thanks!
Python - can you add a tooltip on a wx.CheckBox object?
0
0
0
438
48,780,634
2018-02-14T06:04:00.000
5
0
0
0
python-3.x
52,187,177
1
false
0
0
Use the command below; this worked for me: pip3 install --upgrade oauth2client
1
2
0
I got this error in Python3.6: ModuleNotFoundError: No module named 'oauth2client.client'. I tried pip3.6 install --upgrade google-api-python-client, but I don't know how to fix it. Please tell me how to fix this, thanks.
ModuleNotFoundError: No module named 'oauth2client.client'
0.761594
0
1
7,084
48,780,865
2018-02-14T06:20:00.000
0
0
1
0
python,callback,listener
48,781,129
1
false
0
0
You can achieve that with a while loop that keeps the process alive (and lets your listener keep receiving updates) for as long as your chosen criteria evaluate to true.
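A hedged sketch of such a loop; the callback name and the registration step are placeholders for whatever listener API the drone library provides:

    import time

    running = True

    def on_attitude_change(attitude):      # illustrative callback
        print("new attitude:", attitude)

    # register on_attitude_change with the listener API here, then keep the process alive:
    while running:
        time.sleep(1)   # idle cheaply; callbacks fire whenever updates arrive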
1
0
0
I am trying to write a python script which registers a function as a listener for certain events. However, by the time the listener callback function gets called, the python process has ended, since it is just a short script. What is the best way to keep the python process running so that the callback function can be run when it gets an update? In case it is relevant, I am trying to do this to get state updates from a drone running ardupilot. Whenever the drone's attitude changes, I want my callback function to be run. Thanks in advance for any help!
How to keep python running to respond to callback being called
0
0
0
388
48,784,009
2018-02-14T09:46:00.000
0
1
0
0
python-2.7,raspberry-pi,raspberry-pi3,barcode-scanner,raw-input
48,805,544
1
true
0
0
At last the problem is solved: the barcode scanner has a mode where the Enter key press is sent automatically. You just have to scan the Enter-key configuration barcode from the barcode scanner's manual.
1
0
0
I am trying to take input from a USB barcode scanner in python (Raspberry Pi). The barcode scanner works as a keyboard, so I need to press the Enter key after scanning. I don't want to press the Enter key after scanning the data; the data (barcode) should be stored directly into a variable. How do I do it?
How to use usb barcode scanner with python?
1.2
0
0
2,635
48,787,209
2018-02-14T12:25:00.000
0
0
1
0
python,anaconda,jupyter
48,991,612
4
false
0
0
I tried an old version of Anaconda, specifically Anaconda3-4.0.1, and it works. Now I have my Anaconda Navigator and can launch jupyter notebook.
3
3
0
After installing Anaconda3 I tried to search in my start menu for Anaconda Navigator but it just doesn't show. All I get when typing Anaconda in my start menu is Anaconda Prompt. I have tried to launch jupyter notebook from the Anaconda Prompt but it says "jupyter n'est pas reconnu en tant que commande interne ou externe" ("jupyter is not recognized as an internal or external command"), and it's the same for Anaconda Navigator. I want to add that I selected the option to add Anaconda to the Path, and I have installed miniconda too, and I didn't have any error message while installing.
Jupyter notebook and Anaconda Navigator does not show after installing Anaconda
0
0
0
11,230
48,787,209
2018-02-14T12:25:00.000
0
0
1
0
python,anaconda,jupyter
57,987,418
4
false
0
0
This problem also happened to me under Ubuntu 16.04 LTS. I solved my problem by changing my BASHRC file (~/.bashrc). In this file, there is one line added by the Anaconda installer: export PATH="/home/mustafa1/anaconda3/bin:$PATH" # commented out by conda initialize. I just removed the # sign to make it work (and of course I typed "source ~/.bashrc"); now I can see everything and can use jupyter-notebook and anaconda-navigator. Thus, my problem was a PATH issue. I think it matters where you install your Anaconda.
3
3
0
After installing Anaconda3 I tried to search in my start menu for Anaconda Navigator but it just doesn't show. All I get when typing Anaconda in my start menu is Anaconda Prompt. I have tried to launch jupyter notebook from the Anaconda Prompt but it says "jupyter n'est pas reconnu en tant que commande interne ou externe" ("jupyter is not recognized as an internal or external command"), and it's the same for Anaconda Navigator. I want to add that I selected the option to add Anaconda to the Path, and I have installed miniconda too, and I didn't have any error message while installing.
Jupyter notebook and Anaconda Navigator does not show after installing Anaconda
0
0
0
11,230
48,787,209
2018-02-14T12:25:00.000
2
0
1
0
python,anaconda,jupyter
59,334,555
4
false
0
0
I solved my problem by deleting the .condarc file and restarting Navigator. After that it was OK.
3
3
0
After installing Anaconda3 I tried to search in my start menu for Anaconda Navigator but it just doesn't show. All I get when typing Anaconda in my start menu is Anaconda Prompt. I have tried to launch jupyter notebook from the Anaconda Prompt but it says "jupyter n'est pas reconnu en tant que commande interne ou externe" ("jupyter is not recognized as an internal or external command"), and it's the same for Anaconda Navigator. I want to add that I selected the option to add Anaconda to the Path, and I have installed miniconda too, and I didn't have any error message while installing.
Jupyter notebook and Anaconda Navigator does not show after installing Anaconda
0.099668
0
0
11,230
48,787,340
2018-02-14T12:33:00.000
0
0
0
0
python,tensorflow
48,787,920
1
false
0
0
It's to define the random seed. This way, the weight values are always initialized to the same values. From Wikipedia: a random seed is a number (or vector) used to initialize a pseudo-random number generator.
1
0
1
What is seed=1 doing in the following code: W3 = tf.get_variable("W3", [L3, L2], initializer = tf.contrib.layers.xavier_initializer(seed=1))
seed=1, TensorFlow - Xavier_initializer
0
0
0
160
48,787,973
2018-02-14T13:05:00.000
0
0
1
0
ipython,jupyter-notebook
67,408,827
3
false
0
0
On Safari, if the focus is on an incognito window, the notebook will open automatically in it.
1
3
0
On executing the command jupyter notebook, notebook opens on Mozilla Firefox. How to open notebook on incognito mode of Mozilla Firefox from command line?
Run jupyter notebook in incognito window
0
0
0
2,541
48,789,406
2018-02-14T14:20:00.000
0
0
0
0
python,optimization,scipy,least-squares
48,790,015
1
false
0
0
According to the help for scipy.optimize.least_squares, max_nfev is the number of function evaluations before the program exits: max_nfev : None or int, optional Maximum number of function evaluations before the termination. If None (default), the value is chosen automatically. Again, according to the help, there is no MaxIterations argument, but you can define the tolerance in f (ftol, on the function you want to minimize) or in x (xtol, on the solution) used to decide when to exit. You can also use scipy.optimize.minimize(). In it, you can define a maxiter argument, which goes in the options dictionary. If you do so, beware that the function you want to minimize must be your cost function, meaning that you will have to code your least-squares function yourself. I hope this is clear and useful to you.
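A minimal sketch of both options, with cheap stand-in functions in place of the expensive external calls:

    import numpy as np
    from scipy.optimize import least_squares, minimize

    def my_fun(x):
        return np.array([x[0] - 1.0, x[1] - 2.0])

    def my_jac(x):
        return np.eye(2)

    res = least_squares(fun=my_fun, jac=my_jac, x0=np.zeros(2),
                        max_nfev=1000, ftol=1e-8, xtol=1e-8)

    # minimize() exposes an explicit iteration cap, but needs a scalar cost function:
    res2 = minimize(lambda x: np.sum(my_fun(x) ** 2), np.zeros(2),
                    options={"maxiter": 50})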
1
0
1
I am trying to use scipy.optimize.least_squares(fun=my_fun, jac=my_jac, max_nfev=1000) with two callable functions: my_fun and my_jac. Both functions, my_fun and my_jac, use external software to evaluate their value. This task is very time consuming, therefore I prefer to control the number of evaluations of both. The trf method uses the my_fun function for evaluating whether the trust region is adequate and the my_jac function for determining both the cost function and the jacobian matrix. There is an input parameter max_nfev. Does this parameter count only the fun evaluations? Does it also consider the jac evaluations? Moreover, in matlab there are two parameters for the lsqnonlin function, MaxIterations and MaxFunctionEvaluations. Do they exist in scipy.optimize.least_squares? Thanks, Alon
scipy.optimize.least_squares - limit number of jacobian evaluations
0
0
0
675
48,794,599
2018-02-14T19:14:00.000
1
0
1
1
python-3.x,pyinstaller,cx-freeze
48,795,211
1
false
0
0
3.6 is supported by both now; pip install pyinstaller should work. If you are by chance using an Anaconda environment, you will have to conda install pip before you pip install pyinstaller.
1
0
0
I want to create a standalone executable. I am using python 3.6 on a 64-bit OS, and while trying to install cx freeze I got this message: no matching distribution found for pyinstaller. Same error with cxfreeze.
Is python 3.6 supported by pyinstaller or cxfreeze?
0.197375
0
0
768
48,795,392
2018-02-14T20:09:00.000
0
0
0
0
python,sockets,flask
48,795,531
1
false
1
0
If your HTTP client is written in python the simplest solution would be to use a higher level HTTP library like requests or urllib2. If you want to get the path mappings against your Flask app views you could print them by introspecting the app object and export them to json or some other format and use them in your client. In your sockets example just use GET /?arg=value HTTP/1.1\nHost : \r\n.
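A hedged sketch of the higher-level approach; the URL and argument names mirror the question and are illustrative:

    import requests

    # client side: requests builds "?h=arg" for you
    resp = requests.get("https://myFlaskApp.com/viewfunction", params={"h": "arg"})
    print(resp.status_code)
    print(resp.text)   # the view function's return value, already decoded

    # server side (Flask view):
    #     h = request.args.get("h")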
1
0
0
I am developing a desktop application that must send a specified url to a Flask application hosted online, and subsequently receive data from the same Flask app. 2 applications communicating back & forth. I am able to make GET and POST requests to this Flask app, but I am unaware of how to construct specific URL's which include arguments for the Flask app to receive via request.args.get() Thus far my ability hasn't been entirely erroneous. I can send a request GET / HTTP/1.1\nHost : \r\n which in turn receives something like b'HTTP/1.0 200 OK\r\n' Which is well and good, I got the encoding part down. Beyond this I am at a loss as the Flask view function needs to acquire an argument arg from a specific url - something like myFlaskApp.com/viewfunction?h=arg What would be an at least decent form if not a minimal / pragmatic way of practicing this kind of communication? I haven't much code to show for this one; I would like to leave any stratagem open for debate. I hope you can understand. Thank you! P.S. +<3 if you also show me how to receive and decode the Flask server's view function return value on my app client. Assumed to be an arbitrary string.
Python - Using socket to construct URL for external Flask server's view function
0
0
1
113
48,795,574
2018-02-14T20:22:00.000
0
0
0
0
python,arrays,numpy,coordinates,translation
49,395,441
1
true
0
0
My initial question was very misleading - my apologies for the confusion. I've since solved the problem by translating my local array (data cube) within a global array. To accomplish this, I needed to first plot my data within a larger array (such as a Mayavi scene, which I did). Then, within this scene, I moved my data (eg. using actors in Mayavi) to be centered at the global array's origin. Pretty simple actually - the point here being that my initial question was flawed; thank you all for the help and advice.
1
0
1
I have a 128-length (s) array cube with unique values held at each point inside. At the center of this cube is the meat of the data (representing an object), while on the inner borders of the cube, there are mostly zero values. I need to shift this entire array such that the meat of the data is actually at the origin (0,0) instead of at (s/2, s/2, s/2)... such that my new coordinate origin is actually at (-s/2, -s/2, -s/2). What is the best way to tackle this problem? Edit: Sorry for the lack of data - I'm using a .mrc file. This is all to circumvent a plotting issue in mayaVI using its contour3d method. Perhaps I should be finding a way to translate my plotted object (with mayaVI) instead of translating my raw data array? But aren't these two technically the same thing?
Translating entire coordinates of array to new origin
1.2
0
0
297
48,795,950
2018-02-14T20:50:00.000
3
0
0
0
python,tensorflow,object-detection
54,771,885
2
false
0
0
You don't mention which type of model you are training - if like me you were using the default model from the TensorFlow Object Detection API example (Faster-RCNN-Inception-V2) then num_clones should equal the batch_size. I was using a GPU however, but when I went from one clone to two, I saw a similar error and setting batch_size: 2 in the training config file was the solution.
1
8
1
I wanted to train on multiple CPU so i run this command C:\Users\solution\Desktop\Tensorflow\research>python object_detection/train.py --logtostderr --pipeline_config_path=C:\Users\solution\Desktop\Tensorflow\myFolder\power_drink.config --train_dir=C:\Users\solution\Desktop\Tensorflow\research\object_detection\train --num_clones=2 --clone_on_cpu=True and i got the following error Traceback (most recent call last): File "object_detection/train.py", line 169, in tf.app.run() File "C:\Users\solution\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 124, in run _sys.exit(main(argv)) File "object_detection/train.py", line 165, in main worker_job_name, is_chief, FLAGS.train_dir) File "C:\Users\solution\Desktop\Tensorflow\research\object_detection\trainer.py", line 246, in train clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue]) File "C:\Users\solution\Desktop\Tensorflow\research\slim\deployment\model_deploy.py", line 193, in create_clones outputs = model_fn(*args, **kwargs) File "C:\Users\solution\Desktop\Tensorflow\research\object_detection\trainer.py", line 158, in _create_losses train_config.merge_multiple_label_boxes) ValueError: not enough values to unpack (expected 7, got 0) If i set num_clones to 1 or omitted it, it works normally. I also tries setting --ps_tasks=1 which doesn't help any advice would be appreciated
[Tensorflow][Object detection] ValueError when try to train with --num_clones=2
0.291313
0
0
2,859
48,806,594
2018-02-15T11:51:00.000
0
0
0
1
python,pyqt,maya
48,861,511
2
true
0
0
Solution: Calling the .exe directly seems to set all the PYTHONPATH entries needed for Maya to run. This is not the case when calling it from Python. Manually adding them to the PYTHONPATH before executing solves the issue.
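A hedged sketch of passing an explicit PYTHONPATH to the subprocess; the exact path Maya needs is an assumption and will vary by install:

    import os
    import subprocess

    env = os.environ.copy()
    maya_scripts = r"C:\Program Files\Autodesk\Maya2018\Python\Lib\site-packages"  # illustrative
    env["PYTHONPATH"] = maya_scripts + os.pathsep + env.get("PYTHONPATH", "")
    subprocess.Popen([r"C:/Program Files/Autodesk/Maya2018/bin/maya.exe"], env=env)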
1
0
0
so I am trying to launch Maya using Python 2.7 as subprocess. My goal for now is to fire it up and hand over some variables. Launching is working, but it does throw errors I do not have when launching via a bat file. The process is currently Windows 10 only and I am using Maya 2018 latest update. Seems like PyQT is not getting loaded this way: Autodesk/Maya2018/scripts/startup/initMainWindow.mel line 178: ImportError: file ######\Maya2018\Python\lib\site-packages\maya\app\general\mayaMixin.py line 35: DLL load failed: The specified module could not be found., No module named PyQt4.QtCore // I tried launching using os.system as well as subprocess.Popen both resulting in the same error. My current launch command is a simple call to the exe with no additional parameters. Do I maybe have to source PyQt myself if run via python call? command used is: subprocess.Popen([r"C:/Program Files/Autodesk/Maya2018/bin/maya.exe"]) Thanks Thomas
Maya python launch
1.2
0
0
732
48,806,894
2018-02-15T12:06:00.000
0
0
0
0
python
49,682,887
2
false
1
0
I tried robot.step() and it works, thank you. I use small increments of time so that the code is not continuously blocking and there is time for my sensors to do their readings.
1
0
0
I am using webots for my project at university. I want my robot to do a specific action for a certain amount of time, but I cannot find a way to do it without blocking the code and the sensors, and consequently the whole simulation. I tried both the commands robot.step() and time.sleep(), but they both block the code, and by the time the action is finished the robot does not do anything else even when it is normally supposed to. Specifically, I want the robot to go backwards for a certain amount of time if the sensors at the front and the sides read below a specific distance. Any ideas on how to do it without blocking the code? Because if, for example, I use one of the above commands and there is an object behind the robot, the back sensor will not work because it is blocked and the robot will hit the object. Thank you.
Webots programming with Python - blocking code
0
0
0
529
48,812,910
2018-02-15T17:20:00.000
4
0
0
0
python,selenium
48,813,013
1
true
0
0
The path 'org.openqa.selenium.support.ui.Select' is a Java descriptor. In Python, make sure you have the Python Selenium module installed with pip install selenium, and then import it with import selenium. For the Select function specifically, you can import that with the following from selenium.webdriver.support.ui import Select Then you'll be able to use it like this: select = Select(b.find_element_by_id(....))
1
1
0
I am trying to use the Select function in Selenium for Python 3 to help with navigating through the drop down boxes. However, when I try to import org.openqa.selenium.support.ui.Select I get an error message: "No module named 'org'" Would appreciate any help on this. I saw there was a similar question posted a few weeks ago but the link to that question is now broken. Thanks!
Selenium / Python - No module named 'org'
1.2
0
1
5,709
48,818,101
2018-02-15T23:47:00.000
1
0
0
0
python,django,python-3.x,django-1.10
48,818,718
1
false
1
0
post_save won't know anything about any form that might have caused the model change. If you want to access that checkbox value you need to do it in the form class itself. I would probably override the clean() method of the form, and check for the checkbox value in cleaned_data['checkbox_field'] there, and then do whatever you need to with it.
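A hedged Django sketch of that clean() override; the form, model, and field names are made up:

    from django import forms
    from myapp.models import MyModel   # hypothetical model

    class MyModelForm(forms.ModelForm):
        notify_me = forms.BooleanField(required=False)   # the extra, non-model checkbox

        class Meta:
            model = MyModel
            fields = ["name"]

        def clean(self):
            cleaned_data = super().clean()
            if cleaned_data.get("notify_me"):
                # act on the checkbox here rather than in a post_save handler
                pass
            return cleaned_data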
1
0
0
Is there a way I can access the form that caused a post_save? The use case is that I have a field (a checkbox) that isn't attached to a particular model, but it's an extra field in the form itself. I want to know whether the field was checked or unchecked when the form got saved and the model stored, and imho the post_save signal is a good place to put the logic that should process that extra field. I'm also open to suggestions where else I could put that piece of code.
Access the form that caused a post_save [Django]
0.197375
0
0
148
48,822,796
2018-02-16T08:30:00.000
0
0
0
0
python,image-processing,tensorflow
48,822,919
1
true
0
0
Rejecting out-of-scope inputs (one-class classification) is not something neural networks can do "off-the-shelf". How do you train it? With only data relevant to your target domain? Then your model will only learn to output one of those classes. You have two strategies: use the same strategy as the "HotDog or Not HotDog" app, putting the whole ImageNet into two different folders, one with the class you want and the other containing everything else; or use the convnet as a feature extractor and then use a second model such as a One-Class SVM. You have to understand that one-class classification is not a simple, direct problem like binary classification can be.
1
0
1
I am using Tensorflow retraining model for Image Classification. I am doing single label classification. I want to set a threshold for correct classification. In other words, if the highest probability is less than a given threshold, I can say that the image is "unknown" i.e. if np.max(results) < 0.5 -> set label as "unknown". So, is there any industry standard to set this threshold. I can set a random value say 60%, but is there any literature to back this threshold ? Any links or references will be very helpful. Thanks a lot.
Probability for correct Image Classification in Tensorflow
1.2
0
0
288
48,824,675
2018-02-16T10:24:00.000
2
0
0
0
python-2.7,opencv,hpc,torque,environment-modules
48,829,716
1
true
0
0
The Python module uses a system library (namely libSM.so.6 : library support for the freedesktop.org version of X) that is present on the head node, but not on the compute nodes (which is not very surprising) You can either: ask the administrators to have that library installed systemwide on the compute nodes through the package manager ; or locate the file on the head node (probably in /usr/lib or /usr/lib64 or siblings), and copy it in /home/trig/privatemodules/venv_python275/lib/python2.7/site-packages/cv2/, where Python should find it. If Python still does not find it, run export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/trig/privatemodules/venv_python275/lib/python2.7/site-packages/cv2/ in your Torque script after you load the module. or you can search for the source for libSM and compile it in your home directory
1
2
1
I have a task to train neural networks using tensorflow and opencv-python on HPC nodes via Torque. I have made privatemodule with python virtualenv and installed tensorflow and opencv-python modules in it. In the node I can load my python module. But when I try to run training script I get following error: Traceback (most recent call last): File "tensornetwork/train_user_ind_single_subj2.py", line 16, in <module> from reader_user_ind_single_subj import MyData File "/home/trig/tensornetwork/reader_user_ind_single_subj.py", line 10, in <module> import cv2 File "/home/trig/privatemodules/venv_python275/lib/python2.7/site-packages/cv2/__init__.py", line 4, in <module> from .cv2 import * ImportError: libSM.so.6: cannot open shared object file: No such file or directory The training script can run on head node, but cant on compute node. Can you suggest how to modify my module or add a new module to make training run on compute node using Torque.
create environment module to work with opencv-python on hpc nodes
1.2
0
0
388
48,825,025
2018-02-16T10:43:00.000
1
1
1
0
python,centos,centos7,yum
48,830,296
2
true
0
0
I fixed this issue by installing CentOS on a VM and then copying the python2.7 directory to the server with scp.
2
1
0
I accidentally removed my Python's site-packages, which means I have no modules at all. Unfortunately, I noticed too late that Yum uses a module named yum which is installed in Python's site-packages, located in /usr/local/lib/python2.7/site-packages. I was trying to reinstall yum, but no yum module got installed. Hope to find an answer, thanks!
Centos 7 - No module named yum - Accidentally removed Python site-packages
1.2
0
0
1,086
48,825,025
2018-02-16T10:43:00.000
0
1
1
0
python,centos,centos7,yum
48,825,047
2
false
0
0
Try rpm -V yum which checks for issues with yum
2
1
0
I accidentally removed my Python's site-packages, which means I have no modules at all. Unfortunately, I noticed too late that Yum uses a module named yum which is installed in Python's site-packages, located in /usr/local/lib/python2.7/site-packages. I was trying to reinstall yum, but no yum module got installed. Hope to find an answer, thanks!
Centos 7 - No module named yum - Accidentally removed Python site-packages
0
0
0
1,086
48,825,031
2018-02-16T10:43:00.000
0
0
1
0
python,pycharm,jetbrains-ide
50,845,594
3
false
0
0
With VIM emulation on: Use scrollbar to scroll to the end of what you want to copy. (click/drag bar) Click and drag up to highlight a few lines. Use scrollbar again to scroll to the start of what you want to copy. Shift/click at the start of the text you want to copy. (should now be highlighted) Right click and select copy. This isn't as quick as Ctrl-A, but quicker than turning VIM emulation off/on. This worked for me in the Python Console, windows 10, PyCharm Community 2018.1.2
3
1
0
Is there a sane way to copy log text from the PyCharm console, instead of selecting it slowly with the mouse (especially when there's an abundance of text there)? There seems to be no "Select All" in the debug console. Is that on purpose? Is there any way to copy (all of) the text from the console sanely? I do hope the guys and girls at JetBrains understand that Notepad++ is way easier when looking at/analysing logs.
Is there a simple way to copy text from the debug console of PyCharm?
0
0
0
1,920
48,825,031
2018-02-16T10:43:00.000
0
0
1
0
python,pycharm,jetbrains-ide
48,851,083
3
false
0
0
I just click into the debug console window and press Ctrl-A (standard Windows shortcut for select all). Followed by Ctrl-C to copy and then Ctrl-V to paste it into another app (notepad++ or something).
3
1
0
Is there a sane way to copy log text from the PyCharm console, instead of selecting it slowly with the mouse (especially when there's an abundance of text there)? There seems to be no "Select All" in the debug console. Is that on purpose? Is there any way to copy (all of) the text from the console sanely? I do hope the guys and girls at JetBrains understand that Notepad++ is way easier when looking at/analysing logs.
Is there a simple way to copy text from the debug console of PyCharm?
0
0
0
1,920
48,825,031
2018-02-16T10:43:00.000
0
0
1
0
python,pycharm,jetbrains-ide
50,133,766
3
false
0
0
If you have the Vim emulator on, it won't work. Turn it off by going to Tools and deselecting the Vim emulator. After that you can use Ctrl+A to select the text in the console. If that doesn't work, you might have a mapping over the Ctrl+A shortcut; it's worth checking that out.
3
1
0
Is there a sane way to copy log text from the PyCharm console, instead of selecting it slowly with the mouse (especially when there's an abundance of text there)? There seems to be no "Select All" in the debug console. Is that on purpose? Is there any way to copy (all of) the text from the console sanely? I do hope the guys and girls at JetBrains understand that Notepad++ is way easier when looking at/analysing logs.
Is there a simple way to copy text from the debug console of PyCharm?
0
0
0
1,920
48,825,248
2018-02-16T10:55:00.000
0
0
0
0
python,cluster-analysis,customer
48,833,452
1
false
0
0
Avoid comparing Silhouettes of different projections or scalings. Internal measures tend to be too sensitive. Do not use tSNE for clustering (Google for the discussion on stats.SE, feel free to edit the link into this answer). It will cause false separation and false adjacency; it is a visualization technique. PCA will scale down high variance axes, and scale up low variance directions. It is to be expected that this overall decreases the quality if the main axis is what you are interested in (and it is expected to help if it is not). But if PCA visualization shows only one big blob, then a Silhouette of 0.7 should not be possible. For such a high silhouette, the clusters should be separable in the PCA view.
1
2
1
I work at an ecommerce company and I'm responsible for clustering our customers based on their transactional behavior. I've never worked with clustering before, so I'm having a bit of a rough time. 1st) I've gathered data on customers and I've chosen 12 variables that specify very nicely how these customers behave. Each line of the dataset represents 1 user, where the columns are the 12 features I've chosen. 2nd) I've removed some outliers and built a correlation matrix in order to check of redundant variables. Turns out some of them are highly correlated ( > 0.8 correlation) 3rd) I used sklearn's RobustScaler on all 12 variables in order to make sure the variable's variability doesn't change much (StandardScaler did a poor job with my silhouette) 4th) I ran KMeans on the dataset and got a very good result for 2 clusters (silhouette of >70%) 5th) I tried doing a PCA after scaling / before clustering to reduce my dimension from 12 to 2 and, to my surprise, my silhouette started going to 30~40% and, when I plot the datapoints, it's just a big mass at the center of the graph. My question is: 1) What's the difference between RobustScaler and StandardScaler on sklearn? When should I use each? 2) Should I do : Raw Data -> Cleaned Data -> Normalization -> PCA/TSNE -> Clustering ? Or Should PCA come before normalization? 3) Is a 12 -> 2 dimension reduction through PCA too extreme? That might be causing the horrible silhouette score. Thank you very much!
Clustering Customers with Python (sklearn)
0
0
0
214
48,831,091
2018-02-16T16:33:00.000
5
0
1
0
python,kernel,ipython,spyder
48,834,580
3
false
0
0
(Spyder maintainer here) This bug was introduced by the latest update to Pyzmq (17.0.0). The easiest way to solve this is to downgrade to Pyzmq 16.0.4 until a new version of Ipykernel is released (most probably 4.8.2).
2
3
0
Please bear with me, as I'm new to python and Stackoverflow. When starting Spyder (v3.2.6), my IPython console gets stuck on "Connecting to kernel..." All the solutions to similar inquiries that I can find seem to involve Anaconda, which I don't have installed (to my knowledge), and would prefer not to unless its absolutely necessary. I've tried opening new IPython consoles, restarting the kernel, restarting Spyder, resetting Spyder to factor default settings, but to no avail. Any help is appreciated!
IPython console stuck on "Connecting to kernel..." (Spyder v3.2.6, Py 3.6, Windows 10, 64x)
0.321513
0
0
5,284
48,831,091
2018-02-16T16:33:00.000
0
0
1
0
python,kernel,ipython,spyder
50,948,931
3
false
0
0
Go to "Environments" in the Anaconda Navigator. Search for pyzmq Click on the green tick box and select version 16.0.3 Click on "Apply" That's it, it should work after that. I had the same issue and it got resolved after doing this.
2
3
0
Please bear with me, as I'm new to python and Stackoverflow. When starting Spyder (v3.2.6), my IPython console gets stuck on "Connecting to kernel..." All the solutions to similar inquiries that I can find seem to involve Anaconda, which I don't have installed (to my knowledge), and would prefer not to unless its absolutely necessary. I've tried opening new IPython consoles, restarting the kernel, restarting Spyder, resetting Spyder to factor default settings, but to no avail. Any help is appreciated!
IPython console stuck on "Connecting to kernel..." (Spyder v3.2.6, Py 3.6, Windows 10, 64x)
0
0
0
5,284
48,832,344
2018-02-16T17:51:00.000
0
0
1
0
python,uninstallation
50,609,180
1
true
0
0
This depends on the OS and how Python was installed. For windows, look under %USERPROFILE%\AppData\Local\Programs\Python - or just run the installer again, it should have an option to fix or remove the current install.
1
0
0
I have accidentally removed several parts of python and now am trying to start again... The installer says that 57 files are still on my PC and I cannot find them. Does anyone know how to get a copy of the uninstaller? As it should find the remaining files.
How to remove the remains of python 3.7?
1.2
0
0
3,423
48,833,111
2018-02-16T18:48:00.000
0
0
0
0
python,django,python-requests
49,059,369
1
true
1
0
I know what the problem was. When it was deployed, my application was single-threaded, not multithreaded. I changed my worker count and that fixed everything.
1
0
0
I'm doing a requests.get(url='url', verify=False), from my django application hosted on an Ubuntu server from AWS, to a url that has a Django Rest Framework. There are no permissions or authentication on the DRF, because I'm the one that made it. I've added headers such as headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}, but wasn't able to get any content. BUT when I do run ./manage.py shell and run the exact same command, I get the output that I need! EDIT 1: So I've started using subprocess.get_output("curl <url> --insecure", shell=True) and it works, but I know this is not a very "nice" way to do things.
Why do I not get any content with python requests get, but still a 200 response?
1.2
0
0
48
48,835,136
2018-02-16T21:30:00.000
0
0
1
0
python,pygame,sprite
49,012,486
1
false
0
1
I just realised that you can run pygame.image.load() again to change the image.
1
1
0
I'm working on a program where I have a group() with sprites in them. I start by adding each sprite to the group and passing an image for each sprite. Is it possible loop through each sprite in the group and changing the image of a sprite if a certain criteria is met (if statement), or would you need to remove the whole group and create a new one?
Change sprite's image in pygame
0
0
0
2,080
48,835,956
2018-02-16T22:49:00.000
1
0
1
0
python
48,836,323
1
false
0
0
I'm 99.9999% sure that pyinstaller already has hooks for numpy. Just add import numpy at the top of myscript.py and run pyinstaller --onefile myscript.py && .\dist\myscript.exe. But to answer your question: look in the site-packages folder of your Python folder (type which python to find your python version), or which pyinstaller to see your pyinstaller location (it should be the same as which python but in the Scripts folder). It used to be located at C:\PythonX.Y\Lib\; in 3.X it is usually located in your AppData folder.
1
1
0
I have been trying to convert a .py to .exe using pyinstaller and as you can see from the title.. the .exe file does not execute properly.. it says it does not find the module (numpy). So I did some research and I discovered that pyinstaller can have difficulties in finding modules.. Pyinstaller website: pyi-makespec --paths=/path/to/thisdir \ --paths=/path/to/otherdir myscript.py The code above would help pyinstaller finding them. My questions are: In what directory are my libraries? (such as numpy, pandas, etc) Would this be a different code? or would I add this into my code? or would this go somewhere in this line 'pyinstaller myscript.py' ??? Thanks
Missing modules Pyinstaller
0.197375
0
0
2,771
48,836,650
2018-02-17T00:14:00.000
0
0
0
0
python-3.x,gtk3
65,550,746
2
false
0
1
A lot later I had the same problem. IconView at least now supports that by default (if Ctrl is held). Note that your application must have keyboard focus.
1
1
0
I've already added the code for drag and drop to the iconview widget, but I haven't found any method for dragging two or more items: every time an item is selected, the previous selection is cleaned.
Python3-Gtk3. Iconview. Drag multiple items
0
0
0
109
48,837,086
2018-02-17T01:35:00.000
0
0
0
0
python,automation,pyautogui
70,868,419
1
false
0
1
Use confidence; the default value is 0.999. The reason is that pyscreeze is actually used by pyautogui, and it has a confidence value which most likely represents a percentage from 0% to 100% for a similarity match. Looking through the code with my amateur eyes reveals that OpenCV and NumPy are required for confidence to work; otherwise a different function would be used that doesn't have the confidence value. For example: pyautogui.locateCenterOnScreen('foo.png', confidence=0.5) will set your confidence to 0.5, which means 50%.
1
3
0
I use pyautogui to search an image on Desktop window for click automation. pyautogui.locateOnScreen(image) If the image is captured on the same screen as screenshot, it can be matched. However, if the image is a bit different, it cannot. e.g. captured as the low resolution image. Can I set some likelihood in pyautogui or use other library?
How to search not same images but similar images by pyautogui
0
0
0
330
48,840,025
2018-02-17T09:52:00.000
3
0
0
0
python,tensorflow,heroku,keras,deep-learning
61,806,979
7
false
1
0
A lot of these answers are great for reducing slug size but if anyone still has problems with deploying a deep learning model to heroku it is important to note that for whatever reason tensorflow 2.0 is ~500MB whereas earlier versions are much smaller. Using an earlier version of tensorflow can greatly reduce your slug size.
1
10
1
I have developed a rest API using Flask to expose a Python Keras Deep Learning model (CNN for text classification). I have a very simple script that loads the model into memory and outputs class probabilities for a given text input. The API works perfectly locally. However, when I git push heroku master, I get Compiled slug size: 588.2M is too large (max is 500M). The model is 83MB in size, which is quite small for a Deep Learning model. Notable dependencies include Keras and its tensorflow backend. I know that you can use GBs of RAM and disk space on Heroku. But the bottleneck seems to be the slug size. Is there a way to circumvent this? Or is Heroku just not the right tool for deploying Deep Learning models?
Heroku: deploying Deep Learning model
0.085505
0
0
7,474
48,840,282
2018-02-17T10:21:00.000
0
0
1
0
python,python-3.5,python-3.6
48,840,524
3
false
0
0
Cannot comment since I don't have the rep. If your default python is 3.5 when you check python --version, the way to go would be to find the location of the python executable for the desired version (here 3.6), cd to that folder, and then run the command given by Mike.
1
0
1
I have both Python 3.5 and Python 3.6 on my laptop. I am using Ubuntu 16.04. I used pip3 to install numpy. It is working with Python3.5 but not with Python3.6. Please help.
numpy got installed in Python3.5 but not in Python3.6
0
0
0
2,139
48,841,270
2018-02-17T12:29:00.000
6
0
1
0
python-3.x,tensorflow
49,082,525
1
false
0
0
I've just fixed the same problem. Reason: spaces in names are not accepted. Simply modifying 'the context text' to 'the-context-text' will fix your problem.
1
4
0
File "/Users/Mohannad/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2908, in name_scope raise ValueError("'%s' is not a valid scope name" % name) ValueError: 'the context text' is not a valid scope name AnyBody know what does this mean or how to solve it ?
Valid scope name
1
0
0
2,448
48,842,812
2018-02-17T15:21:00.000
0
0
0
1
python,django,pythonanywhere
48,919,217
2
true
0
0
The problem was within my settings.py - I wasn't pointing the project to the correct database settings - username and password. I changed the settings to reflect those of my pythonanywhere details, and then I could operate manage.py properly from there.
2
0
0
I'm using PythonAnywhere with Postgresql, and have run into several problems. When I try to do anything, such as python manage.py makemigrations, I get the following error : sudo: unknown user: root sudo: unable to initialize policy plugin Also, I tried to use postgres -V, but I get command not found, and yet I can't use sudo to install it. Finally, I'm also not sure what my UNIX password is, but all my permissions are denied to me. Strangely, I've noticed the creation of a dead.letter file, which contains: giles-liveconsole1 : Feb 17 09:25:05 : X : user NOT in sudoers ; TTY=unknown ; PWD=/home/X/X/X ; USER=X ; COMMAND=/bin/bash giles-liveconsole2 : Feb 17 11:43:08 : X : user NOT in sudoers ; TTY=unknown ; PWD=/etc ; USER=#0 ; COMMAND=/usr/bin/vi /etc/passwd giles-liveconsole2 : Feb 17 11:45:51 : X : user NOT in sudoers ; TTY=unknown ; PWD=/etc ; USER=#0 ; COMMAND=/usr/bin/vi /etc/passwd
sudo: unknown user: root via PythonAnywhere
1.2
0
0
1,874
48,842,812
2018-02-17T15:21:00.000
4
0
0
1
python,django,pythonanywhere
48,869,817
2
false
0
0
In general, makemigrations should not be using sudo. If it is, then there's something in your django settings that's making it do that. You don't need to run postgres - it's already running. See the Databases tab in your account for the connection details.
2
0
0
I'm using PythonAnywhere with Postgresql, and have run into several problems. When I try to do anything, such as python manage.py makemigrations, I get the following error : sudo: unknown user: root sudo: unable to initialize policy plugin Also, I tried to use postgres -V, but I get command not found, and yet I can't use sudo to install it. Finally, I'm also not sure what my UNIX password is, but all my permissions are denied to me. Strangely, I've noticed the creation of a dead.letter file, which contains: giles-liveconsole1 : Feb 17 09:25:05 : X : user NOT in sudoers ; TTY=unknown ; PWD=/home/X/X/X ; USER=X ; COMMAND=/bin/bash giles-liveconsole2 : Feb 17 11:43:08 : X : user NOT in sudoers ; TTY=unknown ; PWD=/etc ; USER=#0 ; COMMAND=/usr/bin/vi /etc/passwd giles-liveconsole2 : Feb 17 11:45:51 : X : user NOT in sudoers ; TTY=unknown ; PWD=/etc ; USER=#0 ; COMMAND=/usr/bin/vi /etc/passwd
sudo: unknown user: root via PythonAnywhere
0.379949
0
0
1,874
48,845,754
2018-02-17T20:41:00.000
0
0
1
0
python,pyopencl
48,855,484
1
false
0
0
After looking into it more, here is the answer. If your computer has one or more platforms, their ICD files should be located (on Ubuntu) here: /etc/OpenCL/vendors Then, you only have to copy/paste the icd files into the path where pyOpenCL is installed, in my case: /home/[username]/anaconda2/etc/OpenCL/vendors And finally RESTART, otherwise it won't work (at least in my case)
1
0
0
I am trying to use pyOpenCL in an IPython notebook, Ubuntu 16.4, Nvidia card. But, I am getting the error: clGetPlatformIDs failed: unknown error -1001 However, if I run in terminal "clinfo" I get the 3 platforms installed. Where does PyOpenCL try to find the platforms? I can create a link in the folder that consults.
where or how does pyOpenCL looks for the clplatforms?
0
0
0
87
48,847,375
2018-02-18T00:45:00.000
0
0
1
0
python,scrapy,virtualenv
48,876,629
1
false
1
0
From user stanac: async and await are keywords in Python 3.7, so even after I was able to install Scrapy successfully, I couldn't run bench or shell (to test the install) without throwing errors. I installed a virtualenv targeting 2.7, installed Scrapy there, and ran bench/shell with no issues.
1
1
0
Good day! I'm pretty new to python/scrapy, as in I've never tried it. I've been able to work through a lot of the issues I've come across, but I'm stuck trying to resolve "Running setup.py bdist_wheel for lxml ... error" and can't seem to get past it. I've tried "pip3 install lxml" but that fails. I could paste the whole terminal text, but that is almost 500 lines. What would be helpful to provide enough info to resolve this? OS 10.11.6. Any help would be great! I'm anxious to try out Scrapy! Nick. Maybe I should enter my response here! :) I have been able to install other packages. I've started over a couple of times, so I'm working in a "fresh" virtualenv.
Scrapy install in virtualenv - cant resolve issue. looks related to lxml?
0
0
0
188
48,848,335
2018-02-18T04:05:00.000
2
0
1
0
python,tuples,limits
48,848,431
2
false
0
0
I believe tuples and lists are limited by the size of the machine's virtual memory, unless you're on a 32 bit system in which case you're limited by the small word size. Also, lists are dynamically resized by... I believe about 12% each time they grow too small, so there's a little overhead there as well. If you're concerned you're going to run out of virtual memory, it might be a good idea to write to a file or files instead.
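If memory does become the bottleneck, one option (hinted at above) is to stream rows straight to disk instead of accumulating everything in a tuple/list first; a rough sketch, with made-up column names:

```python
import csv

def save_rows(rows, path="scraped.csv"):
    # 'rows' can be a generator, so only one row is held in memory at a time
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["site", "field1", "field2"])  # hypothetical headers
        for row in rows:
            writer.writerow(row)
```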
1
2
0
I am building a web scraper that stores data retrieved from four different websites into a tuple array. I later iterate through the tuple and save the entire lot as both CSV and Excel. Are tuple arrays or arrays in general, limited to the processor's RAM/disc-space? Thanks
What is the maximum tuple array size in Python 3?
0.197375
0
0
758
48,848,755
2018-02-18T05:26:00.000
2
0
1
1
python,anaconda
50,187,215
2
false
0
0
I have encountered a similar problem. What I did to solve it was remove all of the libraries that were causing errors by editing the .yml file. Why did I do this? Because some packages are actually just support packages for others, and in my case the listed versions of those packages were no longer available or didn't fit. But don't worry: once you update the .yml and re-run the command (in your case "conda env create -f tfdl_env.yml"), those failed packages will be installed anyway, pulled in by the main packages that require them, with the most suitable versions. Hope it helps.
2
3
0
I have been trying to create a conda environment but I keep getting an error saying a package could not be resolved; however, all the packages are already installed. Even when I try to install any of the packages separately, it says the package is already installed. Here is the error I get: conda env create -f tfdl_env.yml Solving environment: failed ResolvePackageNotFound: win_unicode_console==0.5=py35_0 tk==8.5.18=vc14_0 qt==5.6.2=vc14_6 vs2015_runtime==14.0.25420=0 libpng==1.6.30=vc14_1 openssl==1.0.2l=vc14_0 wincertstore==0.2=py35_0 jpeg==9b=vc14_0 six==1.10.0=py35_1 zlib==1.2.11=vc14_0 icu==57.1=vc14_0
Not able to create conda environment on MacBook Air (ResolvePackageNotFound)
0.197375
0
0
1,127
48,848,755
2018-02-18T05:26:00.000
0
0
1
1
python,anaconda
52,279,615
2
false
0
0
Delete the mentioned packages by editing the .yml file and run the command for environment creation. It works fine. Hope this helps.
2
3
0
I have been trying to create a conda environment but I keep getting an error saying a package could not be resolved; however, all the packages are already installed. Even when I try to install any of the packages separately, it says the package is already installed. Here is the error I get: conda env create -f tfdl_env.yml Solving environment: failed ResolvePackageNotFound: win_unicode_console==0.5=py35_0 tk==8.5.18=vc14_0 qt==5.6.2=vc14_6 vs2015_runtime==14.0.25420=0 libpng==1.6.30=vc14_1 openssl==1.0.2l=vc14_0 wincertstore==0.2=py35_0 jpeg==9b=vc14_0 six==1.10.0=py35_1 zlib==1.2.11=vc14_0 icu==57.1=vc14_0
Not able to create conda environment on MacBook Air (ResolvePackageNotFound)
0
0
0
1,127
48,852,421
2018-02-18T13:52:00.000
0
0
0
0
java,python,server,client
48,852,566
1
false
1
0
On the laptop running the server: the client can connect using localhost:<port> or 0.0.0.0:<port>. Connecting from another laptop (same network): you have to connect to <pc-server-local-ip>:<port>. To get <pc-server-local-ip>, on the laptop running your server: - Windows: type ipconfig in the console, the value next to IPv4 - Linux / Mac: type ifconfig in the console, the value next to inet
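A minimal sketch of the Python server side (the port is illustrative); binding to 0.0.0.0 is what lets a client on another machine reach it, provided the Java client connects to the server laptop's LAN IP and the firewall allows the port:

```python
import socket

HOST, PORT = "0.0.0.0", 5000      # 0.0.0.0 = listen on all network interfaces

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
print("listening on %s:%d" % (HOST, PORT))

conn, addr = server.accept()      # the Java client connects to <server-lan-ip>:5000
print("client connected from", addr)
data = conn.recv(1024)
conn.sendall(data)                # echo the message back
conn.close()
server.close()
```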
1
0
0
I made a server using Python on my laptop, and I made a client using Java on the same laptop. They connected and communicated. But when I ran the Java client on another laptop, the client didn't find the server. What is wrong, and what could I do?
python server and java client(another PC) Error
0
0
1
27
48,854,351
2018-02-18T17:09:00.000
1
0
0
0
python,flask,slack,slack-api
48,854,371
1
false
1
0
You could, but that is not a good way of doing it. Your operating system almost certainly has this functionality built-in; on unix-like systems for example you would use cron.
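If you go the cron route, the Flask app stays out of it: cron just runs a small standalone script every day at 6:00 (crontab entry: 0 6 * * * python /path/to/remind.py). A sketch of such a script, assuming a Slack incoming-webhook URL (the URL below is a placeholder):

```python
# remind.py -- executed by cron once a day at 6:00
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def main():
    # post the daily reminder into the configured Slack channel
    requests.post(WEBHOOK_URL, json={"text": "Daily reminder!"})

if __name__ == "__main__":
    main()
```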
1
0
0
I was wondering if it was possible to have a web service running on Flask to execute a function at a certain time every day. I'm making a Slack bot with Flask and Python, and I want the bot to send out a reminder every day at 6:00 to a channel. Is it possible to keep track of the current time continuously, and only perform an action once the current time equals a set time?
How to execute a function at certain time and day with Flask and Python?
0.197375
0
0
479
48,854,507
2018-02-18T17:24:00.000
2
0
1
0
python-3.x
48,854,642
1
true
0
0
It marks an optional parameter. You can call mystring.center(42) as well as mystring.center(42, ' '). The function’s documentation should hint how the behavior would differ.
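A quick illustration of the bracketed optional parameter with str.center:

```python
>>> "abc".center(9)        # fillchar omitted -> padded with spaces
'   abc   '
>>> "abc".center(9, "*")   # fillchar supplied
'***abc***'
```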
1
0
0
Looking through documentation I sometimes see a list of parameters containing par_n[,par x]. For example: str.center(width[, fillchar]) What does this mean? Any attempts of looking for answers using google and in stackoverflow have not been successful.
What does [,some_par] mean in python?
1.2
0
0
29
48,855,204
2018-02-18T18:34:00.000
1
0
0
0
python,django,session
48,855,694
2
true
1
0
AFAIK you can't. You need to implement some kind of time-limited reservation. Temporarily book a timeslot for the user on his second booking step - right after he picks his date. Then, if he finalizes the process, make the booking permanent, or delete the reservation (and make the date available again) after a few minutes. You need to do it this way, otherwise you will end up with lots of dead dates created by users who didn't finish the booking process.
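A rough sketch of what such a time-limited reservation could look like in Django; the model and field names are made up for illustration:

```python
from datetime import timedelta

from django.db import models
from django.utils import timezone


class TimeslotReservation(models.Model):           # hypothetical model
    timeslot = models.ForeignKey("Timeslot", on_delete=models.CASCADE)
    session_key = models.CharField(max_length=40)  # who is holding the slot
    expires_at = models.DateTimeField()

    @classmethod
    def is_available(cls, timeslot):
        # a slot is free if no unexpired reservation exists for it
        return not cls.objects.filter(
            timeslot=timeslot, expires_at__gt=timezone.now()
        ).exists()

    @classmethod
    def reserve(cls, timeslot, session_key, minutes=10):
        # hold the slot for a few minutes while the user finishes checkout
        return cls.objects.create(
            timeslot=timeslot,
            session_key=session_key,
            expires_at=timezone.now() + timedelta(minutes=minutes),
        )
```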
1
0
0
Is it possible in django to iterate all current sessions? I want to implement a calendar where it is impossible to book a timeslot that someone else is booking. I Keep a list of timeslots id's in the session before the user proceeds to checkout.
Iterate over all current sessions in django
1.2
0
0
315
48,856,497
2018-02-18T20:57:00.000
0
0
0
0
python,numpy
64,853,941
2
false
0
0
These are essentially the beta and alpha values for the given data. np.polyfit returns the polynomial coefficients from the highest degree down, so for a degree-1 fit the first number is the slope (beta, the degree of volatility) and the second is the intercept (alpha): the line is y = -1.04*x + 727.2.
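A small illustration; the two points below are made up so that the coefficients match the ones in the question:

```python
import numpy as np

# degree-1 fit through the (made-up) points (1, 726.16) and (26, 700.16)
coeffs = np.polyfit((1, 26), (726.16, 700.16), 1)
print(coeffs)                      # [ -1.04  727.2 ] -> slope, intercept

slope, intercept = coeffs
y_at_10 = np.polyval(coeffs, 10)   # evaluate the line y = slope*x + intercept at x = 10
```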
1
3
1
I went through the docs but I'm not able to interpret it correctly. In my code, I wanted to find a line that goes through 2 points (x1,y1), (x2,y2), so I've used np.polyfit((x1,x2),(y1,y2),1) since it's a degree-1 polynomial (a straight line). It returns me [ -1.04 727.2 ]. Though my code (which is a much larger file) runs properly and does what it is intended to do, I want to understand what this is returning. I'm assuming polyfit returns a line (curved, straight, whatever) that satisfies (goes through) the points given to it, so how can a line be represented with the 2 values which it is returning?
What does np.polyfit do and return?
0
0
0
6,890
48,859,452
2018-02-19T04:15:00.000
1
0
1
0
python,pycharm,cython,mypy
61,969,504
1
false
0
0
The .pyx file is regarded as a text file by default in Pycharm. However, you can change the setting. Go to File->Settings->Editor->File Types. In recognized file types, find "Text", delete the entry ".pyx" in the registered patterns. Then, find "Python" in the recognized file types, add an entry ".pyx" in the registered patterns.
1
9
0
It seems python type checker in PyCharm automatically works for .py files.. but not for .pyx files. Is there any way to enable type checker for .pyx files in PyCharm? Also, is there any way to use mypy with cython files (.pyx files)?
How to enable PyCharm Type Checker feature for cython(.pyx) file?
0.197375
0
0
344
48,859,707
2018-02-19T04:53:00.000
0
1
0
1
python,raspberry-pi,usb-drive
48,959,545
1
false
0
0
Finally got it figured out. After additional trial and error, I figured out that there is something wrong with the wired network port on the Pi. Everything works fine if I swap out my Pi3 with my Pi1, and the Pi3 works if I use the WiFi (been using wired port for speed).
1
0
0
I have a Raspberry Pi connected to a Seagate Backup Plus 3T external hard drive. I've written a Python script to make backup copies of files from my Windows File Server onto the external hard drive. Everything SEEMS to be running fine. But when I copy a file from the external hard drive back to the Windows File Server, I have random bit errors... specifically the high order bit of random bytes will be a '1' in the copied file (i.e. 0x31 ==> 0xB1, 0x2B ==> 0xAB, 0x71 ==> 0xF1). In a 9MB .MOV file, I've got 13 of these random bit errors. In the python application, I've used both shutil.copy2 function to copy the files, and I've written a subroutine to open files for binary read/write and copy 1MB at a time. When I connected the external hard drive to a Windows 10 machine and tried to copy files from File Server to external hard drive and back, I didn't get any errors.
Raspberry Pi: File copy to Seagate Backup Plus Bit Errors
0
0
0
74
48,860,824
2018-02-19T06:47:00.000
1
0
0
0
python,regression,jupyter-notebook,decision-tree
48,877,243
2
false
0
0
The whole point of using machine learning is to let it decide on its own how much weight should be given to which predictor, based on its importance in predicting the label correctly. It just doesn't make any sense to try to do this on your own and then also use machine learning.
2
0
1
I have a data set with a continuous label ranging from one to five and nine different features. I wanted to give weight to each feature manually, because some of the features have very little dependency on the label, so I wanted to give more weight to those features which have more dependency on the label. How can I manually give weight to each feature? Will it be possible to give weight like this? I went through different documentation and can only find how to give weight to the label. All I can find is eliminating features, ranking features, etc. But I wanted to give weight to each feature manually, and I also wanted to tune these weights (sometimes the feature weight will be different for different scenarios, so I wanted to tune the weight according to that). Will it be possible?
how to manually give weight to features using python in machine learning
0.099668
0
0
839
48,860,824
2018-02-19T06:47:00.000
0
0
0
0
python,regression,jupyter-notebook,decision-tree
48,877,568
2
false
0
0
Don't assign weights manually, let the model learn the weights itself. It will automatically decide which features are more important.
2
0
1
I have a data set with a continuous label ranging from one to five and nine different features. I wanted to give weight to each feature manually, because some of the features have very little dependency on the label, so I wanted to give more weight to those features which have more dependency on the label. How can I manually give weight to each feature? Will it be possible to give weight like this? I went through different documentation and can only find how to give weight to the label. All I can find is eliminating features, ranking features, etc. But I wanted to give weight to each feature manually, and I also wanted to tune these weights (sometimes the feature weight will be different for different scenarios, so I wanted to tune the weight according to that). Will it be possible?
how to manually give weight to features using python in machine learning
0
0
0
839
48,861,309
2018-02-19T07:28:00.000
0
1
1
0
python,python-unittest
48,861,482
2
false
0
0
A plain Python function has no status code. Status codes are part of protocols like HTTP. Your test can call the function and check if the result is a dictionary. Without knowing anything more about your function, this is the only thing I can suggest.
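A minimal sketch of such a test, assuming the function lives in a module called mymodule (a made-up name):

```python
import unittest

from mymodule import sample   # hypothetical module containing sample()


class TestSample(unittest.TestCase):
    def test_returns_dict(self):
        result = sample()
        self.assertIsInstance(result, dict)
        # if the expected contents are known, assert them too, e.g.:
        # self.assertEqual(result, {"status": "ok"})


if __name__ == "__main__":
    unittest.main()
```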
1
0
0
Say I have a python function called sample(). It takes no arguments. It returns a dictionary in the end though. How can I perform unit testing for such a function ? Can I test it with status code like 200 ? How can i test if the function is written correctly ?
How to unittest a python function that takes no arguments?
0
0
0
1,043
48,861,862
2018-02-19T08:12:00.000
0
0
1
1
python,jenkins,installation,packages
49,557,738
2
false
0
0
"sys/param.h" is known from unix/linux environments only. I am not that sure but it might be further available with GNU C as well. I have to assume something in your configuration or setup went horribly wrong. As this question is rather specific to a certain project you might have more luck finding the answer if asking this directly at their project support channels.
1
0
0
I am installing Python Jenkins package on Windows 7 x64 bit PC. I got following error: 'C1083: Cannot open include file: 'sys/param.h''. Python version 2.7.12. Any help is appreciated.
C1083: Cannot open include file: 'sys/param.h' error message while installing python jenkins package
0
0
0
1,227
48,864,357
2018-02-19T10:43:00.000
6
0
0
1
python,r,hdf5,netcdf4,ncl
48,870,492
3
true
0
0
With the netcdf-c library you can: $ nccopy in.h5 out.nc
1
2
1
Is there a quick and simple way to convert HDF5 files to netcdf(4) from the command line in bash? Alternatively a simple script that handle such a conversion automatically in R, NCL or python ?
Convert hdf5 to netcdf4 in bash, R, python or NCL?
1.2
0
0
5,482
48,865,959
2018-02-19T12:11:00.000
1
1
0
0
python,algorithm
48,868,888
3
false
0
0
Actually, your solution is not correct, and here is why: Suppose you have many days where the same number of ranges intersect, and this number is the maximum among all others. For example: 1 -> 3 3 -> 6 6 -> 9 9 -> 10 From what I see, you have the following days (3, 6, 9) where all of them have two bills to be paid, and no other day contains more bills to be paid. Now since you can't possibly determine which day to start with, you could for example choose day 6 and pay the bills (2, 3). Next, you have no other option but to choose days 3 and 9 to pay bills 1 and 4 correspondingly. You used 3 days, while the answer is 2: choosing the first day to be 3, paying both bills 1 and 2, then choosing day 9, paying bills 3 and 4. Anyway, I'm pretty sure I have an almost linear time solution for you. First, let's make your input a little bit more clear, and add 30 (or 31 in case of a 31-day month) to the second number if it is in fact smaller than the first one. Your example would look like this: 16 -> 31 2 -> 16 10 -> 25 31 -> 56 15 -> 31 My idea is based on the following 2 facts: Whenever a login is made, it is always better to pay all the bills which are available and haven't been paid yet. When traversing the timeline of the month from the beginning (day 1) to the end (day 60), it is always better to try to delay the login if possible; meaning, as long as the delay won't cause any due date to be missed. In order to do so, let's first assign a unique ID to each entry: 16 -> 31 2 -> 16 10 -> 25 31 -> 56 15 -> 31 Let's use the sweep line algorithm, which generally solves interval-related problems. Create a vector called Sweep where each element of this vector contains the following information: ID: The ID of the corresponding entry. Timer: Indicating either the first or the last day to pay a bill. Type: Just a flag. 0 means that Timer contains the first day to pay the bill number ID (first number), whereas 1 means that Timer contains the last day to pay the bill number ID (second number). For each entry insert 2 elements into the Sweep vector: ID = ID of the entry, Timer = First number, Type = 0. ID = ID of the entry, Timer = Second number, Type = 1. After inserting all these elements, the Sweep vector will have a size equal to 2 x number of entries. Sort this vector increasingly based on the value of Timer, and in case of a tie then increasingly based on the value of Type (in order to check the start of an entry before its end). Traverse the Sweep vector while keeping a set containing the IDs of all the unpaid bills so far; let's call this set Ready. In each step you might deal with one of the following elements (based on the Type we added): Type = 0. In this case it means that you have reached the first day of being able to pay the bill number ID. Don't pay this bill yet. Instead insert its ID into our Ready set (idea 2). Type = 1. In this case check to see whether the corresponding ID is inside the Ready set. If it is not, just continue to the next element. If it is in fact inside the Ready set, this means that you have reached the last day for paying a previously unpaid bill. You have no other option but to pay this bill, along with all the other bills inside the Ready set, on this day (idea 1). By paying the bill I mean to increase the variable containing your answer by one, and, if it's important to you, traverse the Ready set and store somewhere that all these IDs must be paid on the current day. After doing so you have paid all the ready bills, so just clear the Ready set (erase all the elements inside it).
Every entry causes 2 elements to be inserted into the Sweep vector, and every entry will be inserted exactly once into the Ready set, and deleted once as well. The cost for checking an ID inside the Ready set is O(Log N), and it's done for every entry exactly once. The sorting operation is O(N Log N). Thus, your total complexity would be O(N Log N), where N is the total number of entries you have. Python is not really my strongest programming language, so I will leave the mission of coding the mentioned algorithm up to you (in C++ for example it's not that hard to implement). Hope it helps! EDIT (thanks to @Jeff's comment) You can make your solution even O(N) using the following approach: Instead of iterating over the events, you could iterate over the days from 1 to 60, and keep the same handling method as I mentioned. This way we eliminate the sort operation. To remove the O(Log N) factor from the inserting and checking operations we could use a hash table, as mentioned by @Jeff's comment, or instead of a hash table you could use a boolean array Visited together with a vector Ready. You will insert ready bills into the vector. When you need to pay bills you will simply iterate over the Ready vector and mark the bills inside it as visited at their corresponding indexes inside the Visited array. Checking if a bill has been paid can be simply done by accessing the corresponding index inside the Visited array. The funny thing is that after writing my answer I came up with almost the exact same optimization as mentioned by @Jeff's comment. However, seeing that the number of days is really small, I decided not to make my answer any more complex, and keep it easier to understand. Now that @Jeff mentioned the optimization I decided to add it to my answer as well. However, please note that with this optimization the overall complexity now equals O(N + D), where N is the total number of bills, and D is the total number of days. So, if D is quite large you will actually need to stick with the first solution.
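Since the answer leaves the coding as an exercise, here is a rough Python sketch of the sweep-line version described above; the function name, the month length of 31 and the example bills are assumptions, and days greater than the month length simply mean the same date in the following month:

```python
def fewest_logins(bills, month_len=31):
    # bills: list of (arrival_day, due_day); due < arrival means "next month"
    events = []
    for i, (start, end) in enumerate(bills):
        if end < start:
            end += month_len              # unfold the wrap-around into a linear timeline
        events.append((start, 0, i))      # type 0: bill i becomes payable
        events.append((end, 1, i))        # type 1: last day to pay bill i
    events.sort()                         # by day, then type (starts before ends)

    ready, login_days = set(), []
    for day, kind, i in events:
        if kind == 0:
            ready.add(i)                  # available but not paid yet (idea 2)
        elif i in ready:                  # a still-unpaid bill hits its due date
            login_days.append(day)        # log in today and pay everything ready (idea 1)
            ready.clear()
        # if i is not in ready, bill i was already paid on an earlier login
    return login_days


bills = [(16, 1), (2, 16), (10, 25), (31, 26), (15, 31)]
print(fewest_logins(bills))               # [16, 57] -> two logins (57 = 26th of the next month)
```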
1
1
0
Below are a list of recurring monthly bills. The first number is the day of the month the bill arrives (first chance to pay it) and the second number is the due day of the month (last chance to pay it). 16, 1 2, 16 10, 25 31, 26 15, 31 The difference between the arrival and due date is always less than a month. I'm looking for an algorithm that, for any number of bills with any reception dates and any due dates, will: produce a list of fewest possible login dates to the online bank where the bills are paid. guarantee that no due dates are missed. My idea so far is to look for a single date (or date range) on which as many as possible bills are between arrival and due date, and then repeat this process until the list is empty. Is this the best approach? Is there an existing algorithm for this problem? What is it called? Code examples, if any, would be preferred in Python, PHP or just pseudo-code.
Algorithm for fewest logins to online bank
0.066568
0
0
127
48,865,970
2018-02-19T12:11:00.000
0
1
1
0
python,amazon-web-services,aws-lambda,alexa,alexa-skills-kit
48,887,689
3
false
0
0
Just to clarify: If I want to invoke Keras all I have to do is download the Keras directories and put my lambda code and Keras directories as a zip folder and upload it directly from my desktop right? Just wanted to know if this is the right method to invoke Keras.
1
2
0
This is the error I get when I try to invoke my lambda function as a ZIP file. "The file lambda_function.py could not be found. Make sure your handler upholds the format: file-name.method." What am I doing wrong?
Error when invoking lambda function as a ZIP file
0
0
0
1,338
48,866,415
2018-02-19T12:37:00.000
0
0
0
0
python,django
48,867,156
2
false
1
0
One way you can try is searching/reading the .py files in the directory and matching them against a regex pattern describing the distinctive Django main function and package names. Might be fruitful, but... eh
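A rough sketch of that heuristic; the marker strings are assumptions about what typically identifies a Django project (manage.py, DJANGO_SETTINGS_MODULE, INSTALLED_APPS), so it can give false positives or negatives:

```python
import os
import re

DJANGO_MARKERS = re.compile(r"DJANGO_SETTINGS_MODULE|INSTALLED_APPS|django\.core")


def looks_like_django_project(path):
    """Heuristically decide whether 'path' contains a Django project."""
    for root, _dirs, files in os.walk(path):
        for name in files:
            if name == "manage.py":
                return True
            if name.endswith(".py"):
                try:
                    with open(os.path.join(root, name)) as fh:
                        if DJANGO_MARKERS.search(fh.read()):
                            return True
                except (IOError, OSError):
                    continue
    return False
```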
1
5
0
For the last two days I am struggling with the following question: Given an absolute path (string) to a directory (inside my file system or the server, it doesn't matter), determine if this dir contains a valid Django project. First, I thought of looking for the manage.py underneath, but what if some user omit or rename this file? Secondly, I thought of locating the settings module but I want the project root, what if the settings is 2 or more levels deep? Third, I thought of locating the (standard) BASE_DIR name inside settings, but what if the user has not defined it or renamed it? Is there a way to properly identify a directory as a valid Django project? Am I missing something?
Ensure that a path to a dir is a Django project
0
0
0
119
48,866,543
2018-02-19T12:45:00.000
0
0
0
0
python,c++,swig
48,866,544
1
false
0
0
When I was looking around, it seemed that the problem was most likely due to linking with incompatible libraries. However, I discovered that it was actually due to using an abstract class without implementation code. That is, SWIG seems not to be able to create a Python wrapper from a class declaration without implemented methods. I put it here so that anyone else having the same problem will find it. /Tomas
1
1
0
I have a problem with loading the Python library I created by creating a Python API for a C++ project. When I load it into Python I get an error with Symbol not found:... Expected in: flat namespace... EDIT: I have given the solution to my problem below.
SWIG for C++, Symbol not found: Expected in: flat namespace
0
0
0
477
48,866,753
2018-02-19T12:57:00.000
0
0
0
1
python,node.js
48,867,119
2
false
0
0
It's bad practice to grant sudo, as a hacker could do anything if there are any security issues. You could give the user which runs the web server the permission to do the task your script intends to do. In general, try to avoid root whenever you can.
2
1
0
The script creates some files in directories which need sudo permissions and executes a few commands that also need sudo privileges. I want to execute that script with sudo privileges. Is there any way to do that? I am trying to execute it with the python-shell module as well as by spawning a child process.
How to execute a Python script in Node.js with sudo privilege
0
0
0
774
48,866,753
2018-02-19T12:57:00.000
1
0
0
1
python,node.js
48,942,162
2
true
0
0
I never got any answer on it, so I researched it on my own. The best way to run any shell command or script is by using the node-cmd module. It works really well. Just run the Node script with sudo privileges, and you are good to go.
2
1
0
The script creates some files in directories which need sudo permissions and executes a few commands that also need sudo privileges. I want to execute that script with sudo privileges. Is there any way to do that? I am trying to execute it with the python-shell module as well as by spawning a child process.
How to execute a Python script in Node.js with sudo privilege
1.2
0
0
774
48,867,800
2018-02-19T13:58:00.000
9
0
0
0
python,tkinter,treeview,focus,ttk
48,867,937
2
false
0
1
ttk.treeview.focus() returns the current focus item. That means the item that was last selected. The function you are looking for is ttk.treeview.selection(). This returns a tuple of the selected items.
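A small sketch showing the difference; the widget setup and row labels are illustrative:

```python
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
tree = ttk.Treeview(root, selectmode="extended")   # allow selecting multiple lines
for i in range(5):
    tree.insert("", "end", text="row %d" % i)
tree.pack()


def show_selection(event):
    print("selected iids:", tree.selection())      # tuple of iids of all selected lines
    print("focused iid:", tree.focus())            # single iid of the focused line only


tree.bind("<<TreeviewSelect>>", show_selection)
root.mainloop()
```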
1
3
0
ttk.treeview.focus() returns the iid of a single line. The treeview box allows you to select multiple lines. How do I get a list of iids for the selected lines?
Tk Treeview Focus(). How do I Get Multiple Selected Lines?
1
0
0
6,125
48,870,318
2018-02-19T16:29:00.000
0
0
0
0
python,shell,wlst
48,902,430
1
false
1
0
The issue has been resolved: WLST was bypassing the Python libraries, which caused the problem. Importing the libraries with their namespace fixed the issue.
1
0
0
we are trying to pass long value to the WLDFAccessRuntime (mbean of weblogic), attribute is EarliestAvailableTimestamp & LatestAvailableTimestamp, which expects 'L' at the end. EarliestAvailableTimestamp (Default Value) cmo.getEarliestAvailableTimestamp() 1509097885002L But we are trying to change the value via wlst script a=1234 b=long(a) -- this value is passed to EarliestAvailableTimestamp() Though its a long, but its not giving L at the end, so EarliestAvailableTimestamp() is not accepting the value. Eg: s=1519056698455 e=1519057598000 script value: cursorname = cmo.openCursor(long(s),long(e),"") print cursorname CUSTOM/com.bea.wli.monitoring.sla.alertIterator-25--9159200561733388375 Maually Entered value: cursorname = cmo.openCursor(1519056698455L,1519057598000L,"") print cursorname CUSTOM/com.bea.wli.monitoring.sla.alertIterator-26-6422683192499293139 Both the cursorname value should be same.
Need to pass long value to a variable to WLST, which expects L at end
0
0
0
121
48,876,711
2018-02-20T01:44:00.000
0
0
0
0
python,flask,virtualenv
48,877,242
4
false
1
0
You may want to look at using a requirements.txt file in Python. Using $ pip freeze > requirements.txt can build that file with what pip has installed in your virtualenv.
1
0
0
I developed a flask app running on virtualenv, how do I deploy it into production? I have a Red Hat Enterprise Linux Server release 5.6, cannot use docker. The server has cgi and wsgi setup. Python 2.7. I know using the pip install -r requirements.txt, but how do I get the virtualenv to persist on production once my session is terminated? I am using source x../venv/bin/activate export FLASK_APP=myapp.py flask run --host=0.0.0.0 --port=8082 and this will allow me to access myurl:8082 How do I present a way for other users once I terminate session?
How to deploy flask virtualenv into production
0
0
0
2,680
48,877,570
2018-02-20T03:50:00.000
0
0
0
0
python,django,pycharm
48,877,702
1
false
1
0
You do not need to go to Create New -> Data Source -> SQLite (Xerial). If your settings.py database config is as is ('ENGINE': 'django.db.backends.sqlite3'), the database is autogenerated when you run makemigrations and then migrate. To recreate the database (you said you deleted it): remove the previous migrations (delete all files in your app's migrations folder except __init__.py), press Ctrl+Alt+R (or Tools -> run manage.py), then in the manage.py console run makemigrations and migrate. A new database will be created and the migrations applied. You don't have to worry about seeing the entries in the database directly: if you're able to create a superuser, log in to the admin site, and manipulate model data, then you're up and running.
1
0
0
I really am at a loss here. I am using Pycharm 5.0.4 and running a virtual env with Python3 and Django 2.0.1. I am trying to get my database up and running for my project, but no matter what I do I cannot seem to get anything to show up in the database tool window drop down in Pycharm. I have 'ENGINE': 'django.db.backends.sqlite3' set in my settings.py, and in Pycharm i am going to: Create New -> Data Source -> SQlite(Xerial). I then makemigrations and migrate but nothing shows up in the database. I can even go to the project website and succesfully add/create models in the admin site. But I cannot figure out where they are or see them... It was working at one point but I deleted my database because I was getting some errors and now I am trying to recreate it.
unable to see my anything in database pycharm
0
1
0
331
48,878,730
2018-02-20T05:55:00.000
2
0
1
0
python
48,878,828
2
false
0
0
In programming, function refers to a segment that groups code to perform a specific task. A module is a software component or part of a program that contains one or more routines. That means, functions are groups of code, and modules are groups of classes and functions.
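A tiny illustration of the difference, using a made-up file name:

```python
# greetings.py -- this whole file is a module
def say_hello(name):              # a function: a named, reusable block of code
    return "Hello, %s!" % name


# Any other script can now import the module and reuse its function:
#   import greetings
#   print(greetings.say_hello("Ada"))
```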
1
0
0
I am a beginner in Python and I didn't find any difference between a function and a module. It is said that a module stores code even after shutting the shell off, unlike a function, but when I tried to do so it didn't work for me. So what is the big deal of using a module rather than a function in programming?
What is the difference between python function and python module
0.197375
0
0
11,462
48,879,495
2018-02-20T06:56:00.000
2
0
1
0
python,recursion,data-structures,intel,google-colaboratory
48,922,199
4
false
0
0
There is no way to request more CPU/RAM from Google Colaboratory at this point, sorry.
1
1
0
I use Google Colab to test data structures like chain-hashmap, probe-hashmap, AVL-tree, red-black-tree, splay-tree (written in Python), and I store a very large dataset (key-value pairs) with these data structures to test some operation running times; its scale is like a small Wikipedia, so running these Python scripts uses a lot of memory (RAM). Google Colab offers approximately 12GB of RAM, but that is not enough for me: these Python scripts will use about 20-30GB of RAM, so when I run the program in Google Colab it often raises an exception that "your program ran over the 12G upper bound" and restarts. On the other hand, I have some Python scripts that run recursive algorithms; as everyone knows, recursion uses the CPU very much (as well as RAM), and when I run these algorithms with 20000+ levels of recursion, Google Colab often fails to run them and restarts. I know that Google Colab uses two cores of an Intel Xeon CPU, but how do I get more CPU cores from Google?
How to apply GoogleColab stronger CPU and more RAM?
0.099668
0
0
13,142
48,880,273
2018-02-20T07:57:00.000
2
0
0
0
python,tensorflow,neural-network,keras,multilabel-classification
49,065,611
2
false
0
0
You're on the right track. Usually, you would either balance your data set before training, i.e. reduce the over-represented class, or generate artificial (augmented) data for the under-represented class to boost its occurrence. Reduce the over-represented class: This one is simpler; you would just randomly pick as many samples as there are in the under-represented class, discard the rest and train with the new subset. The disadvantage of course is that you're losing some learning potential, depending on how complex your task is (how many features it has). Augment data: Depending on the kind of data you're working with, you can "augment" data. That just means that you take existing samples from your data, slightly modify them and use them as additional samples. This works very well with image data and sound data. You could flip/rotate, scale, add noise, increase/decrease brightness, crop, etc. The important thing here is that you stay within the bounds of what could happen in the real world. If for example you want to recognize a "70mph speed limit" sign, well, flipping it doesn't make sense; you will never encounter an actual flipped 70mph sign. If you want to recognize a flower, flipping or rotating it is permissible. Same for sound: changing volume / frequency slightly won't matter much. But reversing the audio track changes its "meaning" and you won't have to recognize backwards spoken words in the real world. Now if you have to augment tabular data like sales data, metadata, etc... that's much trickier, as you have to be careful not to implicitly feed your own assumptions into the model.
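For the question's own suggestions (class weighting and oversampling), here is a rough sketch of two common options in Keras/NumPy; X_train and y_train are assumed to be the existing training arrays, the 9:1 weight is taken from the 90:10 ratio in the question, and the oversampling assumes positives are the minority class:

```python
import numpy as np

# Option 1: class weighting -- penalise mistakes on the rare positive class more
class_weight = {0: 1.0, 1: 9.0}            # roughly the inverse of the 90:10 ratio
# model.fit(X_train, y_train, epochs=10, class_weight=class_weight)

# Option 2: naive random oversampling of the positive class
pos = np.where(y_train == 1)[0]
n_extra = len(y_train) - 2 * len(pos)      # copies needed to reach a 50:50 split
extra = np.random.choice(pos, size=n_extra, replace=True)
X_balanced = np.concatenate([X_train, X_train[extra]])
y_balanced = np.concatenate([y_train, y_train[extra]])
```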
1
8
1
I'm trying to build a multilabel-classifier to predict the probabilities of some input data being either 0 or 1. I'm using a neural network and Tensorflow + Keras (maybe a CNN later). The problem is the following: The data is highly skewed. There are a lot more negative examples than positive maybe 90:10. So my neural network nearly always outputs very low probabilities for positive examples. Using binary numbers it would predict 0 in most of the cases. The performance is > 95% for nearly all classes, but this is due to the fact that it nearly always predicts zero... Therefore the number of false negatives is very high. Some suggestions how to fix this? Here are the ideas I considered so far: Punishing false negatives more with a customized loss function (my first attempt failed). Similar to class weighting positive examples inside a class more than negative ones. This is similar to class weights but within a class. How would you implement this in Keras? Oversampling positive examples by cloning them and then overfitting the neural network such that positive and negative examples are balanced. Thanks in advance!
Classification: skewed data within a class
0.197375
0
0
1,027
48,880,508
2018-02-20T08:13:00.000
0
0
1
0
python,multithreading,python-2.7,asynchronous
48,881,300
2
true
0
0
Well, you have a couple of options: Use multiprocessing.pool.ThreadPool (Python 2.7), where you create a pool of threads and then use them for dispatching requests; map_async may be of interest here if you want to make async requests. Use concurrent.futures.ThreadPoolExecutor (Python 3), which works in a similar way to ThreadPool and is used for asynchronously executing callables. You even have the option of using multiprocessing.Pool, but I'm not sure if that will give you any benefit, since everything you will be doing is I/O bound, so threads should do just fine. You can make asynchronous requests with Twisted or asyncio, but that may require a bit more learning if you are not accustomed to asynchronous programming.
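A sketch of the first option with multiprocessing.pool.ThreadPool (Python 2.7); the URL and the request/expected file pairs are placeholders, and returning the request name together with the result keeps each response matched to the request it came from:

```python
import json
from multiprocessing.pool import ThreadPool

import requests

URL = "http://example.com/endpoint"        # placeholder


def send_and_compare(pair):
    request_file, expected_file = pair
    with open(request_file) as f:
        payload = json.load(f)
    with open(expected_file) as f:
        expected = json.load(f)
    response = requests.post(URL, json=payload).json()
    return request_file, response == expected   # result stays tied to its request


pairs = [("req1.json", "exp1.json"), ("req2.json", "exp2.json")]
pool = ThreadPool(8)
for name, ok in pool.map(send_and_compare, pairs):
    print(name, "PASS" if ok else "FAIL")
pool.close()
pool.join()
```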
1
0
0
In order to test our server we designed a test that sends a lot of requests with JSON payload and compares the response it gets back. I'm currently trying to find a way to optimize the process by using multi threads to do so. I didn't find any solution for the problem that I'm facing though. I have a url address and a bunch of JSON files (these files hold the requests, and for each request file there is an 'expected response' JSON to compare the response to). I would like to use multi threading to send all these requests and still be able to match the response that I get back to the request I sent. Any ideas?
using threading for multiple requests
1.2
0
1
177
48,882,088
2018-02-20T09:41:00.000
0
0
0
0
python,tensorflow
48,882,154
2
false
0
0
Your CPU seems to be incompatible with the TensorFlow binary you installed: the library was compiled to use FMA instructions, but your machine's CPU cannot execute them.
2
0
1
Python terminal getting abort with following msg: /grid/common//pkgs/python/v2.7.6/bin/python Python 2.7.6 (default, Jan 17 2014, 04:05:53) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import tensorflow as tf 2018-02-20 01:40:11.268134: F tensorflow/core/platform/cpu_feature_guard.cc:36] The TensorFlow library was compiled to use FMA instructions, but these aren't available on your machine. Abort
import tensorflow with python 2.7.6
0
0
0
230
48,882,088
2018-02-20T09:41:00.000
0
0
0
0
python,tensorflow
48,882,437
2
false
0
0
You need to compile TensorFlow from source on the same computer, so the build only uses instructions your CPU actually supports.
2
0
1
Python terminal getting abort with following msg: /grid/common//pkgs/python/v2.7.6/bin/python Python 2.7.6 (default, Jan 17 2014, 04:05:53) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import tensorflow as tf 2018-02-20 01:40:11.268134: F tensorflow/core/platform/cpu_feature_guard.cc:36] The TensorFlow library was compiled to use FMA instructions, but these aren't available on your machine. Abort
import tensorflow with python 2.7.6
0
0
0
230
48,883,888
2018-02-20T11:08:00.000
0
0
0
0
opencv,python-tesseract
48,884,112
1
false
0
0
Localize your detection by setting the rectangles where Tesseract has to look. You can then restrict, according to the rectangle, which type of data is present at that place, for example numerical, alphabetic, etc. You can also make a dictionary file for Tesseract to improve accuracy (this can be used for detecting the card holder's name by listing common names in a file). If there is disturbance in the background, then design a filter to remove it. Good luck!
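A rough sketch of the ROI-plus-whitelist idea with OpenCV and pytesseract; the file name, the box coordinates and the digits-only whitelist are illustrative and depend on the licence layout:

```python
import cv2
import pytesseract
from PIL import Image

img = cv2.imread("licence.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# hypothetical rectangle around the licence-number field: (x, y, width, height)
x, y, w, h = 100, 200, 300, 50
roi = gray[y:y + h, x:x + w]

# treat the crop as a single text line and allow digits only
config = "--psm 7 -c tessedit_char_whitelist=0123456789"
print(pytesseract.image_to_string(Image.fromarray(roi), config=config))
```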
1
0
1
I am working on pytesseract. I want to read data from a driving licence kind of document. Presently I am converting the .jpg image to binary (gray scale) format using OpenCV, but I am not getting accurate results. How do you solve this? Is there any standard size of image?
pytesseract - Read text from images with more accuracy
0
0
0
424
48,886,359
2018-02-20T13:25:00.000
3
0
1
0
python,string,optimization,coding-style
48,886,714
2
true
0
0
Maybe my_string[:-i or None]? Because -0 equals 0, the or None part is an elegant way to convert 0 into None, which gives the behaviour you want.
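A quick check of both cases:

```python
my_string = "abcdef"

for i in (2, 0):
    print(my_string[:-i or None])
# i = 2 -> 'abcd'    (-2 truncates two characters)
# i = 0 -> 'abcdef'  (0 is falsy, so the slice becomes [:None], i.e. the whole string)
```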
1
2
0
It often happens that we need to truncate the end of a string by a certain amount. The correct way to do this is my_string[:-i]. But if your code allows i to be 0, this tuncate the whole string. The solution I generally use is to do my_string[:len(my_string)-i], which works perfectly fine. Although I have always found that a bit ugly. Is there a more elegant way to achieve that behaviour?
Truncate zero characters from the end of a string
1.2
0
0
60
48,890,390
2018-02-20T16:56:00.000
0
0
0
0
python,opencv,camera,background-subtraction,opencv-contour
50,014,778
3
false
0
1
A possible cause for this error could be mild jitters in the frame that occur due to mild shaking of the camera. If your background subtraction algorithm isn't tolerant enough to low-value colour changes, then a tamper alert will be triggered even if you shake the camera a bit. I would suggest using MOG2 for background subtraction.
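A sketch combining MOG2 with the area check described in the question; the thresholds and the camera index are guesses that would need tuning for a real setup:

```python
import cv2

cap = cv2.VideoCapture(0)
# detectShadows=False and a moderate varThreshold make small jitters less likely
# to show up as foreground
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32,
                                                detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    covered = cv2.countNonZero(mask) / float(mask.size)
    if covered > 0.75:                    # most of the view changed at once
        print("possible tamper: lens blocked or camera moved")

cap.release()
```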
1
0
1
I'm trying to detect camera tampering (lens being blocked, resulting in a black frame). The approach I have taken so far is to apply background subtraction and then finding contours post thresholding the foreground mask. Next, I find the area of each contour and if the contour area is higher than a threshold value (say, larger than 3/4th of the camera frame area), then the camera is said to be tampered. Although upon trying this approach, there is a false tamper alert, even when the camera captures full view. Not sure, how to go about this detection. Any help shall be highly appreciated
detecting when the camera view is blocked (black frame)
0
0
0
1,009
48,890,843
2018-02-20T17:20:00.000
0
0
0
1
python,amazon-web-services,erpnext
48,911,671
1
true
1
0
This is due to fail2ban, which was treating the number of requests from a LAN connection as a brute-force attack and hence blocking further requests. After purging fail2ban, everything worked fine.
1
0
0
We have an ERPNext open software which is working fine on AWS Large Instance of 500 GB HDD. Recently, as it is an excess configuration for our usage, we downgraded to Medium Instance and 20 GB HDD. Also, we have changed the OS, from Ubuntu 14.x to 16.x. So the python version also got changed(i.e., Python 2.7.6 to Python 2.7.12). From then, we are facing a frequent disconnecting issue i.e., it shows site not found when we access the site. We are using Elastic IP. We upgraded to a Large instance and the same issue still continuous. So it is not an instance issue. I feel this is weird because, when few people at my office, says it is not working, I can access the site from my home at the same point of the time. But, they are able to access other sites, except this ERP site. I'm not getting what caused the problem. Can anyone help me with this? Is this a problem with AWS or IP issues or Ubuntu 16.04 not supporting ERPNext or Python Version? I've left with no clue. Any help is greatly appreciated. Thanks.
ERPNext on AWS server disconnecting frequently
1.2
0
0
77
48,891,538
2018-02-20T18:02:00.000
-1
0
0
0
python,scikit-learn,one-hot-encoding
64,678,289
3
false
0
0
Basically, first apply fit_transform to the base (training) data and then apply only transform to the sample data, so the sample data gets exactly the same number of columns as the base data.
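A small sketch with LabelBinarizer; the category values are made up, and the point is that fit_transform learns all five classes from the full column while transform then reuses them for the subset:

```python
from sklearn.preprocessing import LabelBinarizer

categories = ["a", "b", "c", "d", "e"]      # full training column: 5 categories
sample = ["a", "c", "b"]                    # subset containing only 3 of them

lb = LabelBinarizer()
full_matrix = lb.fit_transform(categories)  # learns all 5 classes -> 5 columns
sample_matrix = lb.transform(sample)        # reuses them           -> still 5 columns
print(sample_matrix.shape)                  # (3, 5)
```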
1
3
1
I have a dataset with a category column. In order to use linear regression, I 1-hot encode this column. My set has 10 columns, including the category column. After dropping that column and appending the 1-hot encoded matrix, I end up with 14 columns (10 - 1 + 5). So I train (fit) my LinearRegression model with a matrix of shape (n, 14). After training it, I want to test it on a subset of the training set, so I take only the 5 first and put them through the same pipeline. But these 5 first only contain 3 of the categories. So after going through the pipeline, I'm only left with a matrix of shape (n, 13) because it's missing 2 categories. How can I force the 1-hot encoder to use the 5 categories ? I'm using LabelBinarizer from sklearn.
One-hot-encoding with missing categories
-0.066568
0
0
2,335
48,891,679
2018-02-20T18:11:00.000
2
0
0
0
python,elasticsearch,nlp
48,898,306
1
true
0
0
Your objective requires that you perform part of speech tagging on your query, and then use those tags to identify nouns. You would then need to compare the extracted nouns to a pre-curated list of food strings and, after identifying those that are not food, remove the clauses of which those nouns are the subject and /or the phrases of which they are the object. This functionality is not built into elasticsearch. Depending on what language you are processing your queries with, there are various libraries for part of speech tagging and string manipulation. Updated answer: Just read through this and realized this answer isn't very good. The best way to solve this problem is with document/phrase vectorization. Vectorized properly, you should be able to encode the noun phrases 'Blueberry' and 'Blueberry dishwashing soap' as very different vectors, and then you can take all sorts of approaches as far as inferring classifications from those vectors.
1
1
0
We are currently trying to process user input and checking if user has entered a food item using elastic search. With elastic search we are able to get results for wide range of terms: Garlic , Garlic Extract etc... How should we handle use cases E.g. Blueberry Dish-washing soap Or Apple based liquid soap . How do we omit these searches ? As I search Blueberry Dish-washing soap I still get search results related to Blueberry
How to filter out elastic searches for invalid inputs
1.2
0
1
35
48,895,898
2018-02-20T23:31:00.000
0
0
1
0
python-3.x,tensorflow,virtualenv,python-3.4,virtualenvwrapper
49,717,503
1
true
0
0
I had faced a similar issue with the same hardware. If I am guessing right and you are following the same set of install instructions, install the .whl for TensorFlow without using sudo: using sudo, even from inside the virtual environment, installs it in the location seen by root and not inside the virtual environment.
1
0
1
I have installed tensorflow and opencv on odroid xu4. Tensorflow was installed using a .whl file for raspberry pi and it built successfully. Opencv was built successfully inside virtualenv environment. I can import opencv as import cv2 from inside virtual environment for python but not tensorflow. Tensorflow is getting imported from outside virtual environment even though .whl file for the same was run from inside the virtual environment. I have researched a lot regarding this and couldn't figure out a solution to make tensorflow work from inside virtualenv. These are the things i know. 1) I know from where python3 is importing tensorflow when run outside the virtualenv 2) I know from where python3 is accessing all the packages from inside the virtualenv. 3) I am unable to import tensorflow from python inside the virtualenv 4)virtualenv was configured for python3. 5)importing OpenCV works fine from inside virtualenv. Can someone please suggest how to link python3 when run inside virtualenv to also look for the directory of tensorflow which i know.
How to add directory to a python running inside virtualenv
1.2
0
0
255
48,896,407
2018-02-21T00:27:00.000
2
0
0
0
javascript,python,html,ajax,google-app-engine
48,897,992
2
false
1
0
Best practice would be for the script to not take 10-15 seconds. What is your script doing? Is it generating something that you can pre-compute and cache or save in Google Cloud Storage? If you're daisy-chaining datastore queries together, is there something you can do to make them happen async in tandem? If it really has to take 10-15 seconds, then I'd say option 2 is a must: User clicks a link > html page is immediately returned (with progress bar) > AJAX post request to the server side > complete script > return result to html.
2
3
0
What is the current best practice and method of loading a webpage (that has 10 - 15 seconds worth of server side script). User clicks a link > server side runs > html page is returned (blank page for 10 - 15 seconds). User clicks a link > html page is immediately returned (with progress bar) > AJAX post request to the server side > complete script > return result to html. Other options (threading?) I am running Google App Engine (Python) Standard Environment.
Best practice for loading webpage with long server side script
0.197375
0
1
73
48,896,407
2018-02-21T00:27:00.000
1
0
0
0
javascript,python,html,ajax,google-app-engine
48,899,196
2
true
1
0
The way we're doing it is using the Ajax approach (the second one), which is what everyone else does. You can use Task Queues to run your scripts asynchronously and return the result to the front end using FCM (Firebase Cloud Messaging). You should also try to break the script into multiple task queues to make it run faster.
2
3
0
What is the current best practice and method of loading a webpage (that has 10 - 15 seconds worth of server side script). User clicks a link > server side runs > html page is returned (blank page for 10 - 15 seconds). User clicks a link > html page is immediately returned (with progress bar) > AJAX post request to the server side > complete script > return result to html. Other options (threading?) I am running Google App Engine (Python) Standard Environment.
Best practice for loading webpage with long server side script
1.2
0
1
73