Dataset schema (column, type, and min/max value or string length):

Column | Type | Min | Max
--- | --- | --- | ---
Q_Id | int64 | 337 | 49.3M
CreationDate | stringlengths | 23 | 23
Users Score | int64 | -42 | 1.15k
Other | int64 | 0 | 1
Python Basics and Environment | int64 | 0 | 1
System Administration and DevOps | int64 | 0 | 1
Tags | stringlengths | 6 | 105
A_Id | int64 | 518 | 72.5M
AnswerCount | int64 | 1 | 64
is_accepted | bool | 2 classes | 
Web Development | int64 | 0 | 1
GUI and Desktop Applications | int64 | 0 | 1
Answer | stringlengths | 6 | 11.6k
Available Count | int64 | 1 | 31
Q_Score | int64 | 0 | 6.79k
Data Science and Machine Learning | int64 | 0 | 1
Question | stringlengths | 15 | 29k
Title | stringlengths | 11 | 150
Score | float64 | -1 | 1.2
Database and SQL | int64 | 0 | 1
Networking and APIs | int64 | 0 | 1
ViewCount | int64 | 8 | 6.81M
42,257,535
2017-02-15T18:39:00.000
1
0
1
0
python,python-3.x,pycharm
43,207,326
1
true
0
0
The default interpreter (and the default settings in general) applies to newly created projects. If you want to change the settings of an already created project, go to File > Settings (or Ctrl+Alt+S) > Project: [your project] > Project Interpreter. In the Project Interpreter dropdown list you can pick the appropriate interpreter from the list. From here you can even create a new virtual environment for your project by clicking ⚙ > Create VirtualEnv.
1
1
0
I've already changed the default project interpreter in PyCharm under File | Default Settings | Project Interpreter to Python 3.6, but when I try to write variable annotations (e.g. int: x = 6) PyCharm complains that Python version 3.4 does not support variable annotations, as Python 3.4 was the former interpreter I was using. How do I change the syntax check to that of Python 3.6? Or any other interpreter, for that matter.
Change project interpreter after creation Pycharm
1.2
0
0
2,446
42,258,224
2017-02-15T19:17:00.000
24
0
1
0
python,python-3.x,class
42,258,503
1
true
0
0
Let's break them down: class foo: Python 3: It's usually the way to go. By default, Python adds object as the base class for you. Python 2: It creates an old style classobj that will cause you all sorts of headaches. class foo(): Python 3 and Python 2: Similar to class foo for both Python versions, trim it off, it looks ugly and makes no difference. class foo(object): Python 3 and Python 2: In both Pythons, results in a new style class that has all the goodies most know. People usually use this form when writing code that might be used in Python 2 too, explicitly inheriting from object causes the class to be new style in Python 2 and makes no difference in 3 (apart from some extra typing).
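For illustration, a minimal side-by-side sketch of the three forms discussed above (the class names are invented); running it under Python 2 makes the old-style/new-style difference visible:

```python
class Foo:              # old-style class in Python 2, new-style in Python 3
    pass

class Bar():            # the empty parentheses change nothing
    pass

class Baz(object):      # new-style class in both Python 2 and Python 3
    pass

# Under Python 2:
#   type(Foo())  -> <type 'instance'>        (old-style)
#   type(Baz())  -> <class '__main__.Baz'>   (new-style)
# Under Python 3 all three are ordinary new-style classes.
print(type(Foo()), type(Bar()), type(Baz()))
```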
1
13
0
I noticed all 3 -> class foo, class foo() and class foo(object) can be used but i am confused as to what is the difference between these 3, if there is any? (I mean in properties mainly, python3)
Difference between class foo , class foo() and class foo(object)?
1.2
0
0
2,917
42,258,274
2017-02-15T19:20:00.000
4
0
1
0
python-3.x,sockets
42,260,182
1
true
0
0
It will make little difference to the sending operation. (I assume you are using a TCP socket for the purposes of this discussion.) When you attempt to send 1K, the kernel will take that 1K, copy it into kernel TCP buffers, and return success (and probably begin sending to the peer at the same time). At which point, you will send another 1K and the same thing happens. Eventually if the file is large enough, and the network can't send it fast enough, or the receiver can't drain it fast enough, the kernel buffer space used by your data will reach some internal limit and your process will be blocked until the receiver drains enough data. (This limit can often be pretty high with TCP -- depending on the OSes, you may be able to send a megabyte or two without ever hitting it.) If you try to send in one shot, pretty much the same thing will happen: data will be transferred from your buffer into kernel buffers until/unless some limit is reached. At that point, your process will be blocked until data is drained by the receiver (and so forth). However, with the first mechanism, you can send a file of any size without using undue amounts of memory -- your in-memory buffer (not including the kernel TCP buffers) only needs to be 1K long. With the sendall approach, file.read() will read the entire file into your program's memory. If you attempt that with a truly giant file (say 40G or something), that might take more memory than you have, even including swap space. So, as a general purpose mechanism, I would definitely favor the first approach. For modern architectures, I would use a larger buffer size than 1K though. The exact number probably isn't too critical; but you could choose something that will fit several disk blocks at once, say, 256K.
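A rough sketch of the chunked approach described above, assuming `conn` is an already-connected TCP socket; the function name and buffer size are illustrative, not from the answer:

```python
CHUNK_SIZE = 256 * 1024  # several disk blocks at once, as suggested above

def send_file_in_chunks(conn, path):
    """Send a file over a connected TCP socket without reading it all into memory."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:          # end of file reached
                break
            conn.sendall(chunk)    # blocks only until the kernel accepts this chunk
```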
1
3
0
I have python code that sends data to socket (a rather large file). Should I divide it into 1kb chunks, or would just conn.sendall(file.read()) be acceptable?
Should I send data in chunks, or send it all at once?
1.2
0
1
1,254
42,260,538
2017-02-15T21:36:00.000
3
0
0
0
python,loops,csv,time,export-to-csv
42,260,728
2
true
0
0
"How do I best write the data of the polygons that I create into the csv? Do I open the csv at the beginning and then write each row into the file, as I iterate over classes and images?" I suspect most folks would gather the data in a list or perhaps a dictionary and then write it all out at the end. But if you don't need to do additional processing to it, yes: send it to disk and release the resources. "And I guess writing the data into the csv right away would result in less RAM used, right?" Yes, it would, but it's not going to impact CPU usage; it just reduces RAM usage (exactly how much depends on when Python garbage-collects the objects). You really shouldn't worry about details like this. Get accurate output, first and foremost.
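A sketch of the write-as-you-go approach with Python's csv module; the column names, image ids, and the compute_polygons helper are hypothetical stand-ins for the question's own data and algorithm:

```python
import csv

def compute_polygons(image_id, class_id):
    # Stand-in for the question's polygon algorithm.
    return "MULTIPOLYGON EMPTY"

image_ids = ["img_001", "img_002"]   # stand-in for the 429 image ids

# Open the file once, then write each row as soon as it is computed,
# so the polygon data never has to accumulate in RAM.
with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ImageId", "ClassType", "Polygons"])   # header row
    for image_id in image_ids:
        for class_id in range(1, 11):                       # classes 1-10
            writer.writerow([image_id, class_id, compute_polygons(image_id, class_id)])
```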
1
0
1
I am currently working on the Dstl satellite kaggle challenge. There I need to create a submission file that is in csv format. Each row in the csv contains: Image ID, polygon class (1-10), Polygons Polygons are a very long entry with starts and ends and starts etc. The polygons are created with an algorithm, for one class at a time, for one picture at a time (429 pictures, 10 classes each). Now my question is related to computation time and best practice: How do I best write the data of the polygons that I create into the csv? Do I open the csv at the beginning and then write each row into the file, as I iterate over classes and images? Or should I rather save the data in a list or dictionary or something and then write the whole thing into the csv file at once? The thing is, I am not sure how fast the writing into a csv file is. Also, as the algorithm is already rather consuming computationally, I would like to save my pc the trouble of keeping all the data in the RAM. And I guess writing the data into the csv right away would result in less RAM used, right? So you say that disc operations are slow. What exactly does that mean? When I write into the csv each row live as I create the data, does that slow down my program? So if I write a whole list into a csv file that would be faster than writing a row, then again calculating a new data row? So that would mean, that the computer waits for an action to finish before the next action gets started, right? But then still, what makes the process faster if I wait for the whole data to accumulate? Anyway the same number of rows have to be written into the csv, why would it be slower if I do it line by line?
Writing data into a CSV within a loop Python
1.2
0
0
705
42,262,714
2017-02-16T00:34:00.000
0
0
1
0
python,pygame,compatibility
42,279,070
1
false
0
1
I know it says python 3.2 on the packages but I am currently running python 3.6 and it works fine. Just download python 3.6 and install pygame.
1
0
0
I'm using PyGame, so I backdated to Python 3.2. I would like to use a number generator that's cryptographically secure in my game, because I can. I tried importing the secrets module, but it's not in the 3.2 Python version. How do I use both the PyGame and secrets modules in the same program? Maybe by scripting a switch in Python versions in a function?
Using secrets module with PyGame
0
0
0
243
42,264,022
2017-02-16T03:00:00.000
1
0
1
0
python,python-2.7
42,289,440
1
true
0
0
Call win32file.SetEndOfFile(handle) after positioning the file handle to the offset that you want to be the new end of file. This is similar to the ftruncate POSIX system call, or writing 0 bytes in DOS.
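A short sketch of that call sequence with pywin32; the CreateFile flags and the target size are illustrative, and in the question's case the handle would already exist:

```python
import win32con
import win32file

# Open (or create) a file for writing; arguments mirror the Win32 CreateFile call.
handle = win32file.CreateFile(
    "example.dat",
    win32con.GENERIC_WRITE,
    0, None,
    win32con.OPEN_ALWAYS,
    win32con.FILE_ATTRIBUTE_NORMAL,
    None)

new_size = 4096  # illustrative target size in bytes

# Move the file pointer to the desired end-of-file offset, then cut the file there.
win32file.SetFilePointer(handle, new_size, win32con.FILE_BEGIN)
win32file.SetEndOfFile(handle)
handle.Close()
```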
1
0
0
How do you truncate a PyHandle returned by win32file.CreateFile. I know you can open it with the TRUNCATE_EXISTING flag, but how do you truncate it to a specific size after reading/writing? Note: The reason I cannot use the standard library is because I'm using win32file to restrict simultaneous reading/writing to a file.
Truncate PyHandle (win32file)
1.2
0
0
63
42,264,307
2017-02-16T03:28:00.000
1
0
0
1
swift,python-2.7
56,529,866
2
false
0
0
For me, Apple Swift is under /usr/bin/swift and python-swiftclient is under /usr/local/bin/swift. Explicitly invoking it as /usr/local/bin/swift works.
1
0
0
I have installed OpenStack swift python client (pip install python-swiftclient). However /usr/bin has swift executable (which I can not remove as it is owned by root) and is overriding python swift. Requirement already satisfied: python-swiftclient in /Library/Python/2.7/site-packages Requirement already satisfied: requests>=1.1 in /Library/Python/2.7/site-packages (from python-swiftclient) Requirement already satisfied: six>=1.5.2 in /Library/Python/2.7/site-packages/six-1.10.0-py2.7.egg (from python-swiftclient) Requirement already satisfied: futures>=3.0; python_version == "2.7" or python_version == "2.6" in /Library/Python/2.7/site-packages (from python-swiftclient) However, I am unable to find python swift anywhere. Please let me know how to resolve this. Many Thanks Chen
Apple Swift is overriding Openstack swift package
0.099668
0
0
416
42,265,676
2017-02-16T05:28:00.000
0
0
0
0
python,sockets,tcp
42,266,135
1
true
0
0
There is no way to do this on the level of the TCP socket interface available in Python since the OS kernel does the connection setup already before the applications returns from accept. You would need to handle this outside of the application with firewall rules or use raw sockets or a user space network stack where you are not restricted to how connections are handled in the kernel and what the socket interface offers.
1
0
0
I am looking to simulate a TCP server, where I would want to reject connection with different error codes in ICMP message. Currently, the issue is even before it reaches handle_accept() in sockets SYN,ACK would have already reached to the server, and I can reject the connection with ICMP errors! Did anybody ever tried it? Is there any other way to do it? Thanks in Advance!
Reject TCP SYN with ICMP error messages in python
1.2
0
1
125
42,267,553
2017-02-16T07:30:00.000
0
1
0
0
mysql,python-2.7,amazon-web-services,aws-lambda
42,268,813
3
false
0
0
You should install your packages in your Lambda folder: $ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER. Then compress the whole directory into a zip and upload it to your Lambda.
1
1
0
I want to import and use dataset package of python at AWS Lambda. The dataset package is about MySQL connection and executing queries. But, when I try to import it, there is an error. "libmysqlclient.so.18: cannot open shared object file: No such file or directory" I think that the problem is because MySQL client package is necessary. But, there is no MySQL package in the machine of AWS Lambda. How to add the third party program and how to link that?
How to use the package written by another language in AWS Lambda?
0
1
0
131
42,270,739
2017-02-16T10:07:00.000
0
0
1
0
python,tensorflow
42,642,759
7
false
0
0
Those are simply warnings. They are just informing you that if you build TensorFlow from source it can be faster on your machine. I think those instructions are not enabled by default in the available builds so that they stay compatible with as many CPUs as possible.
2
14
1
I just installed Tensorflow 1.0.0 using pip. When running, I get warnings like the one shown below. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. I get 5 more similar warning for SSE4.1, SSE4.2, AVX, AVX2, FMA. Despite these warnings the program seems to run fine.
How do I resolve these tensorflow warnings?
0
0
0
10,737
42,270,739
2017-02-16T10:07:00.000
0
0
1
0
python,tensorflow
42,539,825
7
false
0
0
It would seem that the pip build for the GPU is bad as well, as I get the warnings with the GPU version and a GPU installed...
2
14
1
I just installed Tensorflow 1.0.0 using pip. When running, I get warnings like the one shown below. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. I get 5 more similar warning for SSE4.1, SSE4.2, AVX, AVX2, FMA. Despite these warnings the program seems to run fine.
How do I resolve these tensorflow warnings?
0
0
0
10,737
42,271,330
2017-02-16T10:32:00.000
0
1
0
1
python,cron,crontab
42,271,741
2
true
0
0
A simple solution is to set a Bash environment variable, e.g. MONITORING=true, and let your Python script check that variable using os.environ["MONITORING"]. If the variable is true, check whether the server is up or down; otherwise don't check anything. Once the server is found down, set the variable to false from the script, like os.environ["MONITORING"] = "false", so it won't send emails until you set that variable back to true.
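A sketch of that flag idea. One assumption on my part: a value set with os.environ inside the script does not survive between separate cron runs, so this variant persists the flag in a small state file instead; the path and helper names are made up:

```python
import os

STATE_FILE = "/tmp/server_down.flag"   # hypothetical location for the flag

def already_alerted():
    """True if an alert email was already sent for the current outage."""
    return os.path.exists(STATE_FILE)

def mark_alert_sent():
    open(STATE_FILE, "w").close()

def mark_server_up():
    """Clear the flag once the server is reachable again."""
    if os.path.exists(STATE_FILE):
        os.remove(STATE_FILE)

# In the cron-driven script:
#   if server_is_down() and not already_alerted():
#       send_email(); mark_alert_sent()
#   elif not server_is_down():
#       mark_server_up()
```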
2
0
0
I have python script that checks if the server is up or down, and if it's down it sends out an email along with few system logs. What I want is to keep checking for the server every 5 minutes, so I put the cronjob as follows: */5 * * * * /python/uptime.sh So whenever the server's down, it sends an email. But I want the script to stop executing (sending more emails) after the first one. Can anyone help me out with how to do this? Thanks.
Running cronjob every 5 minutes but stopped after first execution?
1.2
0
0
227
42,271,330
2017-02-16T10:32:00.000
0
1
0
1
python,cron,crontab
42,295,869
2
false
0
0
Write an empty while True script that runs forever (e.g. "mailtrigger.py") and run it with nohup mailtrigger.py from the shell. In your cron script, once the server is found down, check whether mailtrigger.py is running; if it is, send the mail and then terminate mailtrigger.py (kill its process id). Your next iterations will not send mails, since mailtrigger.py is no longer running.
2
0
0
I have python script that checks if the server is up or down, and if it's down it sends out an email along with few system logs. What I want is to keep checking for the server every 5 minutes, so I put the cronjob as follows: */5 * * * * /python/uptime.sh So whenever the server's down, it sends an email. But I want the script to stop executing (sending more emails) after the first one. Can anyone help me out with how to do this? Thanks.
Running cronjob every 5 minutes but stopped after first execution?
0
0
0
227
42,274,756
2017-02-16T13:02:00.000
2
0
0
0
python,machine-learning,3d,tensorflow,scikit-learn
42,284,733
1
false
0
0
You have to first extract "features" out of your dataset. These are fixed-dimension vectors. Then you have to define labels which define the prediction. Then you have to define a loss function and a neural network. Put that all together and you can train a classifier. In your example, you would first need to extract a fixed-dimension vector out of an object. For instance, you could extract the object and project it onto a fixed support along the x, y, and z dimensions. That defines the features. For each object, you'll need to label whether it's convex or concave. You can do that by hand, analytically, or by creating objects analytically that are known to be concave or convex. Now you have a dataset with a lot of sample pairs (object, is-concave). For the loss function, you can simply use the negative log-probability. Finally, a feed-forward network with some convolutional layers at the bottom is probably a good idea.
1
1
1
I try to write an script in python for analyse an .stl data file(3d geometry) and say which model is convex or concave and watertight and tell other properties... I would like to use and TensorFlow, scikit-learn or other machine learning library. Create some database with examples of objects with tags and in future add some more examples and just re-train model for better results. But my problem is: I don´t know how to recalculate or restructure 3d data for working in ML libraries. I have no idea. Thank you for your help.
How to analyse 3d mesh data(in .stl) by TensorFlow
0.379949
0
0
1,021
42,281,212
2017-02-16T17:48:00.000
3
0
0
0
python,mysql,flask
42,281,576
1
false
1
0
Packages like flask-mysql or Flask-SQLAlchemy provide useful defaults and extra helpers that make it easier to accomplish common CRUD tasks. Such packages are good at handling relationships between objects: you only need to create the objects, and they come with the functions and helpers you need to work with the database, so you don't have to implement such code yourself and you don't need to worry about the performance of the queries. I worked on a Django project (I believe the theory in Flask is similar) and its ORM is really amazing; all I needed to do was write models and encapsulate the business logic. All CRUD commands are handled by the built-in ORM, so as developers we don't have to worry about the SQL statements. Another benefit is that it makes database migration much easier: you can switch from MySQL to PostgreSQL with minimal code modifications, which speeds up development.
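A small Flask-SQLAlchemy sketch of the init-style setup being discussed; the model and the connection URI are illustrative only:

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "mysql://user:password@localhost/mydb"  # illustrative
db = SQLAlchemy(app)   # the init step ties connection pooling to the app/request lifecycle

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))

# CRUD goes through the session, which manages connections per request:
#   db.session.add(User(name="alice"))
#   db.session.commit()
```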
1
1
0
I noticed that most examples for accessing mysql from flask suggest using a plugin that calls init_app(app). I was just wondering why that is as opposed to just using a mysql connector somewhere in your code as you need it? Is it that flask does better resource management with request life cycles?
accessing mysql from within flask
0.53705
1
0
100
42,289,465
2017-02-17T04:34:00.000
2
0
0
0
python,google-app-engine,debugging,stackdriver
42,302,731
1
true
1
0
This usually happens when you try to take a snapshot in source code that is not part of the executing service/version, for example when the code you are using belongs to another running service in the same project. Please use the console Feedback tool or email [email protected] with this issue. We will help you figure this one out. Thanks, Erez
1
2
0
I am trying to debug and make a capture in the Stackdriver debug appengine tool, it shows me the code, including the error line in StackDriver but when I try to make a capture after a few seconds the message appears in red: "python module not found". Any ideas?
Stackdriver debug appengine error: python module not found
1.2
0
0
120
42,290,182
2017-02-17T05:36:00.000
1
0
0
0
python,tensorflow,neural-network
42,292,153
2
false
0
0
Do they always have two labels? If so, try "label1-label2" as one combined label, or simply build two networks, one for label 1 and the other for label 2. Are they hierarchical labels? Then check out hierarchical classifiers.
2
1
1
I want to implememt multitask Neural Network in tensorflow, for which I need my input as: [image label1 label2] which I can give to the neural network for training. My question is, how can I associate more than one label with image in TFRecord file? I currently was using build_image_data.py file of inception model for genertrating TFRecord file but in that cases there is just one label per image.
How to give more than one labels with an image in tensorflow?
0.099668
0
0
249
42,290,182
2017-02-17T05:36:00.000
0
0
0
0
python,tensorflow,neural-network
42,298,590
2
false
0
0
I got this working. For anyone looking for a reference: you can modify the Example proto in the build_image_data.py file and associate it with two labels. :)
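For illustration, a hypothetical Example proto carrying two labels per image, in the spirit of the modification described above; the feature key names are my own choice, not taken from build_image_data.py:

```python
import tensorflow as tf

def make_example(image_bytes, label1, label2):
    # One tf.train.Example holding the encoded image and two integer labels.
    return tf.train.Example(features=tf.train.Features(feature={
        "image/encoded": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])),
        "image/label1": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label1])),
        "image/label2": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label2])),
    }))
```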
2
1
1
I want to implememt multitask Neural Network in tensorflow, for which I need my input as: [image label1 label2] which I can give to the neural network for training. My question is, how can I associate more than one label with image in TFRecord file? I currently was using build_image_data.py file of inception model for genertrating TFRecord file but in that cases there is just one label per image.
How to give more than one labels with an image in tensorflow?
0
0
0
249
42,291,191
2017-02-17T06:49:00.000
0
0
0
0
python,xpath,web-scraping,scrapy,web-crawler
42,297,186
2
true
1
0
You are wrong: Scrapy cannot reproduce real browser-like behavior. From the image you linked I can see you are scraping Amazon, so open that link in a browser and click a checkbox; you will notice the URL in the browser also changes according to the newly set filter. Then put that URL in your Scrapy code and do your scraping. If you do want to emulate real browser-like behavior, use Selenium, PhantomJS, or CasperJS with Python.
1
3
0
I need to scrape a url which has checkboxes in it. I wanna click some of the checkboxes and scrape and I wanna scrape again with someother checkboxes clicked. For instance; I wanna click new and then scrape and then I wanna scrape the same url with Used and Very Good clicked. Is there a way to do this without making more than 1 request which is done for getting the url. I guess html changes when you click one of the boxes since the listing will change when you refine the search. Any thoughts? Any suggestions? Best, Can
Scrapy - How to put a check into checkboxes in a url then scrape
1.2
0
1
1,305
42,295,171
2017-02-17T10:12:00.000
0
0
0
1
tkinter,python-3.6
42,347,600
1
false
0
1
If you want to install tkinter only in order to use matplotlib, you may instead try: import matplotlib; matplotlib.use('Agg'); import matplotlib.pyplot as plt. It worked for me.
1
3
0
I have started to play with Python and I went directly to Python 3.6. I have two Python environments now in my system: Python 2..6.6 and Python 3.6 Python 2.6.6 is under: which python /usr/bin/python And Python 3.6 is under /opt/python3/bin My problem is that if I try to import tkinter in Python 3.6 it does not work: ./python3.6 Python 3.6.0 (default, Feb 16 2017, 17:37:36) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux Type "help", "copyright", "credits" or "license" for more information. import tkinter Traceback (most recent call last): File "", line 1, in File "/opt/python3/lib/python3.6/tkinter/init.py", line 36, in import _tkinter # If this fails your Python may not be configured for Tk ModuleNotFoundError: ****No module named '_tkinter'**** If I do in Python 2.6 it works: python Python 2.6.6 (r266:84292, Aug 18 2016, 15:13:37) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import Tkinter PLEASE NOTE, I know that the module is lower case t in Python 3 so instead of import Tkinter, I am typing import tkinter. My question is: How do I install tkinter in Python 3 in CentOS. This I what have tried so far: yum install python3-tk Loaded plugins: fastestmirror, refresh-packagekit, security Loading mirror speeds from cached hostfile * base: mirror.us.leaseweb.net * extras: mirror.us.leaseweb.net * updates: mirror.us.leaseweb.net Setting up Install Process No package python3-tk available. Error: Nothing to do How do I install in CentOS 6 the module tkinter and make Python 3 able to use it? Thanks for any feedback.
How to install tkinter in python 3.6 in CentOS release 6.4
0
0
0
2,402
42,300,271
2017-02-17T14:23:00.000
2
0
0
0
python,google-drive-api
42,303,792
2
false
0
0
From your question it sounds like you are using a Service Account to proxy to a standard account. The first thing to do is to establish which account is out of quota, ie. is it the Service Account or is it the standard account? You can use the About.get method to see the used and available quota for each account. If it's the Service Account, it might be because the uploaded files are still owned by the Service Account. You might need to change their permission so they become owned by the standard account. The answer that @nicolas linked to is very helpful. If you are using a Service Account as a proxy, consider not doing this because it's a bit hacky. Instead you should consider uploading directly to the standard account using a saved Refresh Token. There are pros and cons of each approach.
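For illustration, a hypothetical quota check with the google-api-python-client Drive v3 API; `creds` is assumed to be whichever credential object (service account or standard user) you want to inspect:

```python
from googleapiclient.discovery import build

# `creds` is assumed: a service-account or user credential you already have.
drive_service = build("drive", "v3", credentials=creds)

# Ask the Drive API how much storage the authenticated account has used.
about = drive_service.about().get(fields="user,storageQuota").execute()
print(about["user"]["emailAddress"])
print(about["storageQuota"])   # e.g. {'limit': ..., 'usage': ...}
```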
1
0
0
I have created a service account for Google Drive API two months ago and was using it to upload files in weekly basics to a shared folder. From couple of days I am getting the below error while trying to upload files using this API "The user has exceeded their Drive storage quota" I tried to upload into another folder but still got the same issue. I am not sure if I am doing anything wrong here. Thanks,Teja
Google Drive API service Account "The user has exceeded their Drive storage quota" python
0.197375
0
1
4,699
42,301,719
2017-02-17T15:32:00.000
1
0
1
0
python,elasticsearch,elasticsearch-dsl,elasticsearch-py
42,313,679
1
true
0
0
So you just want to use the Search object's query, but not its aggregations? In that case just call the object's search() method to get the Search object and go from there. If you want the aggregations, but just want to skip the Python-level facets calculation, use the build_search method to get the raw Search object including the aggregations.
1
0
1
I have created my own customised FacetedSearch class using Pythons Elasticsearch DSL library to perform search with additional filtering in def search(self). Now I would like to reuse my class to do some statistical aggregations. To stay DRY I want to reuse this class and for performance reason I would like to temporarily disable facets calculation when they are not necessary while preserving all the filtering. So question is how can I temporarily omit facets in FacetedSearch search?
Temporarily disable facets in Python's FacetedSearch
1.2
0
0
103
42,304,636
2017-02-17T18:03:00.000
1
0
0
0
python,windows,tkinter,explorer
42,306,129
1
true
0
1
Import tkinter.filedialog with a statement such as from tkinter import filedialog (3.x) or import tkFileDialog as filedialog (2.x). This module is not properly documented in the CPython docs, nor anywhere else I know of, so read the code to determine which of the ask... methods you want to use. In 3.x, the code is Lib/tkinter/filedialog.py. In 2.x, Lib/lib-tk/tkFileDialog.py. EDIT: From what you said, you may want either askopenfilename or askopenfilenames. I believe these return names without opening the files. The functions without name actually open the files.
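A minimal Python 3 sketch of what the question asks for, using askopenfilename to drop the chosen path into an Entry widget; the widget layout is illustrative:

```python
import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
entry = tk.Entry(root, width=60)
entry.pack(side="left", padx=5, pady=5)

def browse():
    path = filedialog.askopenfilename()   # returns the path; the file is not opened
    if path:
        entry.delete(0, tk.END)
        entry.insert(0, path)

tk.Button(root, text="Open", command=browse).pack(side="left", padx=5)
root.mainloop()
```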
1
0
0
How would I 1. Access windows explorer 2. Use it to to gain the path to a file. For example when I click "open" it pastes the address of the selected file into an entry box I'm using tkinter if that helps at all. Thanks in advance.
Use Windows explorer to get the path to a file in python
1.2
0
0
742
42,305,145
2017-02-17T18:35:00.000
2
0
1
0
django,python-2.7,ms-word,labels
42,426,037
1
true
1
0
Not sure if anybody will find this helpful. I had an extremely tough time finding a way to print out mailing labels from my Django app. Instead I decided to export an spreadsheet using the xlwt library. Then you can use MS Word's Mail Merge functions to get Avery labels for each of your contacts.
1
0
0
I have a database full of families with addresses. I want to print out mailing labels for each family. I have various avery labels to use, is there an easy way to do this task? Is there a library or some tutorials you know of that others have used to accomplish this? I used a project that was ported to python 2.6 and used pyPDF to make a pdf with labels of specific dimensions, but I think it may be outdated. The labels printed don't line up. Do I just need to adjust these or is there an easier way to save the data and do a mail merge in Word? If there is not another way, I guess I'll just create a spreadsheet with the fields to import into Word.
Django print labels
1.2
0
0
754
42,307,980
2017-02-17T21:47:00.000
0
0
1
0
python,anaconda,conda
42,308,219
1
true
0
0
While I haven't used conda myself, I expect they aren't trying to change the concept of a virtual environment too much. That being said, I personally find it better to keep them separate, i.e. have a ~/.virtualenvs and a ~/repos folder. As you mentioned, though, it's pretty common to store both the virtualenv and the project itself in the same folder. What I would stress here is that the virtualenv should then be in the project folder, not the other way around. For example: ~/repos/Foo/.fooenv The reason for this is that virtualenvs should be disposable, whereas your projects are not. That means that you should be able to freely remove a virtualenv without fearing you've accidentally deleted your project folder along with it.
1
1
0
I am a beginner trying to learn a bit of Python; first practical applications will be data analytics. My learning setup consists of Mac OS X, Miniconda2, Pycharm and Git. Is it better to set up a project folder 'bar' within a conda environment folder 'foo' (~/miniconda2/env/foo/bar)? Or is it better to leave the conda environment alone as ~/miniconda2/env/foo and set up a project folder as ~/repos/bar? Virtualenv users I've seen put the env and the project in a single folder, but I have not seen a similar, popular or recommended workflow for conda. Thank you in advance for any advice.
Python project folder and its Miniconda2 environment folder
1.2
0
0
234
42,309,798
2017-02-18T00:48:00.000
0
0
0
0
python,tkinter,python-idle
42,333,708
1
true
0
1
Outline of possible solution: Create a 1-pixel wide Frame, with a contrasting background color, as a child of the Text. Use .place() to position the Frame at an appropriate horizontal coordinate. Possible issues: I don't see any easy way to get the coordinate for a particular column. Text.bbox("1.80") looks promising, but doesn't work unless there actually are 80 characters in the line already. You may have to insert a dummy 80-character line at first, call update_idletasks to get the Text to calculate positions, call bbox to get the coordinates, then delete the dummy text. Repeat whenever the display font is changed. The line would necessarily appear on top of any text or selection region, which isn't quite the visual appearance I'd expect for a feature like this.
1
0
0
My intention is to add a vertical bar to IDLE to indicate preferred line length at column 80. I have tried to find a configuration option for the Text tkinter widget that would allow this but have found nothing. I was hoping it would be a simple configuration option so I could just add a another item the text_options dictionary within EditorWindow.py found within Python\Lib\idlelib. I am not sure how styles/themes work but do they have the capability to change the background colour of only 1 column in a Text widget?
Adding a vertical bar or other marker to tkinter Text widgets at a particular column
1.2
0
0
395
42,311,100
2017-02-18T04:35:00.000
17
0
0
1
apache-kafka,kafka-consumer-api,kafka-producer-api,kafka-python
42,328,553
1
true
0
0
Apache Kafka uses Log data structure to manage its messages. Log data structure is basically an ordered set of Segments whereas a Segment is a collection of messages. Apache Kafka provides retention at Segment level instead of at Message level. Hence, Kafka keeps on removing Segments from its end as these violate retention policies. Apache Kafka provides us with the following retention policies - Time Based Retention Under this policy, we configure the maximum time a Segment (hence messages) can live for. Once a Segment has spanned configured retention time, it is marked for deletion or compaction depending on configured cleanup policy. Default retention time for Segments is 7 days. Here are the parameters (in decreasing order of priority) that you can set in your Kafka broker properties file: Configures retention time in milliseconds log.retention.ms=1680000 Used if log.retention.ms is not set log.retention.minutes=1680 Used if log.retention.minutes is not set log.retention.hours=168 Size based Retention In this policy, we configure the maximum size of a Log data structure for a Topic partition. Once Log size reaches this size, it starts removing Segments from its end. This policy is not popular as this does not provide good visibility about message expiry. However it can come handy in a scenario where we need to control the size of a Log due to limited disk space. Here are the parameters that you can set in your Kafka broker properties file: Configures maximum size of a Log log.retention.bytes=104857600 So according to your use case you should configure log.retention.bytes so that your disk should not get full.
1
11
0
I am fairly new to kafka so forgive me if this question is trivial. I have a very simple setup for purposes of timing tests as follows: Machine A -> writes to topic 1 (Broker) -> Machine B reads from topic 1 Machine B -> writes message just read to topic 2 (Broker) -> Machine A reads from topic 2 Now I am sending messages of roughly 1400 bytes in an infinite loop filling up the space on my small broker very quickly. I'm experimenting with setting different values for log.retention.ms, log.retention.bytes, log.segment.bytes and log.segment.delete.delay.ms. First I set all of the values to the minimum allowed, but it seemed this degraded performance, then I set them to the maximum my broker could take before being completely full, but again the performance degrades when a deletion occurs. Is there a best practice for setting these values to get the absolute minimum delay? Thanks for the help!
Kafka optimal retention and deletion policy
1.2
0
0
19,180
42,311,985
2017-02-18T06:38:00.000
0
0
1
0
python
42,312,009
3
false
0
0
Yes, it is possible. You could write the text to a file through the normal process and then run the execfile command. There are probably better ways to do it, but that one just came to mind.
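A tiny sketch of that idea (write generated source to a .py file, then execute it); exec is used here because execfile only exists in Python 2, and executing generated code blindly is of course unsafe in general:

```python
generated_source = 'print("hello from the generated program")\n'

# Write the generated program to disk through the normal file API...
with open("generated_prog.py", "w") as f:
    f.write(generated_source)

# ...then execute it, as the answer suggests (execfile in Python 2).
with open("generated_prog.py") as f:
    exec(compile(f.read(), "generated_prog.py", "exec"))
```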
2
0
0
Could this be a possibility? Could you develop a program to write a program for you inside of the already existing program? Then save the program the program has written to a .py file and execute it as if it where the user?
Python program to write other programs
0
0
0
45
42,311,985
2017-02-18T06:38:00.000
0
0
1
0
python
42,312,004
3
false
0
0
Of course, but to create such a program you need excellent knowledge of programming and of many more things.
2
0
0
Could this be a possibility? Could you develop a program to write a program for you inside of the already existing program? Then save the program the program has written to a .py file and execute it as if it where the user?
Python program to write other programs
0
0
0
45
42,313,604
2017-02-18T09:50:00.000
0
0
1
0
python
42,313,657
1
true
0
0
Apparently I was running the code from the wrong directory, so the running environment couldn't find the tools directory. I was in the parent directory and ran the code like this: python main\run.py. The interpreter then looked for tools in the parent directory of the project. So I did cd main and ran python run.py, and it worked (because it then looked for the tools dir in the project directory).
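A common variant of the fix (not what the answer actually did) is to anchor the appended path to the script's own location instead of the current working directory, so the import works no matter where you run from:

```python
import os
import sys

# Resolve ../tools relative to this file rather than relative to the cwd.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "tools"))
from tool import myFunc
```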
1
0
0
I am trying to import from a relative path using sys.path.append My directories look like that: /main --run.py /tools --tool.py at main.py I'm having this code for importing tool.py: sys.path.append("../tools/") from tool import myFunc but when I run the code, get thin error: ImportError: No module named tool
Python import from relative path error
1.2
0
0
87
42,313,776
2017-02-18T10:09:00.000
5
0
1
1
python-3.x,nltk
43,109,101
6
false
0
0
I had the same problem as you, but I accidentally found pip.exe in my python directory, so I navigated to said directory with CMD and ran the command pip install -U nltk and it worked.
4
3
0
I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions. 1 I heard pip is automatically downloaded for versions 3+ but when I type pip install nltk in command prompt im getting the following error even though i added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings and ya in above path i tried pip36 and pip in both cases result is same. 'pip' is not recognized as an internal or external command," 2 In www.nltk.org I found nltk for mac, unix and windows32 but not for windows64 ,does that mean it doesnt support for 64bit or is there any way for me to install nltk.
nltk for python 3.6 in windows64
0.16514
0
0
21,144
42,313,776
2017-02-18T10:09:00.000
3
0
1
1
python-3.x,nltk
43,700,474
6
false
0
0
Run the Python interpreter and type the commands: >>> import nltk >>> nltk.download() A new window should open, showing the NLTK Downloader. Click on the File menu and select Change Download Directory. For central installation, set this to C:\nltk_data (Windows), /usr/local/share/nltk_data (Mac), or /usr/share/nltk_data (Unix). Next, select the packages or collections you want to download. If you did not install the data to one of the above central locations, you will need to set the NLTK_DATA environment variable to specify the location of the data. (On a Windows machine, right click on "My Computer" then select Properties > Advanced > Environment Variables > User Variables > New...) Test that the data has been installed as follows (this assumes you downloaded the Brown Corpus): >>> from nltk.corpus import brown >>> brown.words() ['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', ...] Installing via a proxy web server: If your web connection uses a proxy server, you should specify the proxy address as follows. In the case of an authenticating proxy, specify a username and password. If the proxy is set to None then this function will attempt to detect the system proxy. >>> nltk.set_proxy('http://proxy.example.com:3128', ('USERNAME', 'PASSWORD')) >>> nltk.download() Command line installation: The downloader will search for an existing nltk_data directory to install NLTK data. If one does not exist it will attempt to create one in a central location (when using an administrator account) or otherwise in the user's filespace. If necessary, run the download command from an administrator account, or using sudo. The recommended system location is C:\nltk_data (Windows); /usr/local/share/nltk_data (Mac); and /usr/share/nltk_data (Unix). You can use the -d flag to specify a different location (but if you do this, be sure to set the NLTK_DATA environment variable accordingly). Run the command python -m nltk.downloader all. To ensure central installation, run the command sudo python -m nltk.downloader -d /usr/local/share/nltk_data all. Windows: Use the "Run..." option on the Start menu. Windows Vista users need to first turn on this option, using Start -> Properties -> Customize to check the box to activate the "Run..." option.
4
3
0
I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions. 1 I heard pip is automatically downloaded for versions 3+ but when I type pip install nltk in command prompt im getting the following error even though i added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings and ya in above path i tried pip36 and pip in both cases result is same. 'pip' is not recognized as an internal or external command," 2 In www.nltk.org I found nltk for mac, unix and windows32 but not for windows64 ,does that mean it doesnt support for 64bit or is there any way for me to install nltk.
nltk for python 3.6 in windows64
0.099668
0
0
21,144
42,313,776
2017-02-18T10:09:00.000
1
0
1
1
python-3.x,nltk
48,652,887
6
false
0
0
I recommend using Anaconda on Windows. Anaconda has an nltk build for 64-bit Python. I'm now using Python 3.6.4 64-bit and nltk. In the Python shell run: import nltk; nltk.download(). The downloader will open in a new window and you can download what you want.
4
3
0
I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions. 1 I heard pip is automatically downloaded for versions 3+ but when I type pip install nltk in command prompt im getting the following error even though i added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings and ya in above path i tried pip36 and pip in both cases result is same. 'pip' is not recognized as an internal or external command," 2 In www.nltk.org I found nltk for mac, unix and windows32 but not for windows64 ,does that mean it doesnt support for 64bit or is there any way for me to install nltk.
nltk for python 3.6 in windows64
0.033321
0
0
21,144
42,313,776
2017-02-18T10:09:00.000
3
0
1
1
python-3.x,nltk
46,702,612
6
false
0
0
Directly search for the pip folder and navigate to that path, for example: C:\Users\PAVAN\Environments\my_env\Lib\site-packages\pip. Run cmd there and then run the command pip install -U nltk.
4
3
0
I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions. 1 I heard pip is automatically downloaded for versions 3+ but when I type pip install nltk in command prompt im getting the following error even though i added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings and ya in above path i tried pip36 and pip in both cases result is same. 'pip' is not recognized as an internal or external command," 2 In www.nltk.org I found nltk for mac, unix and windows32 but not for windows64 ,does that mean it doesnt support for 64bit or is there any way for me to install nltk.
nltk for python 3.6 in windows64
0.099668
0
0
21,144
42,313,951
2017-02-18T10:30:00.000
0
0
1
0
python,video,ffmpeg,moviepy
50,184,791
2
false
1
0
Since modern video coding standards use inter-frame prediction, a general solution for removing frames without re-encoding does not exist. (Removing a reference picture breaks inter prediction, so only non-reference pictures can be removed.)
1
0
0
I understand this might be a trivial question, but so far I had no luck with the various solutions I tried and I'm sure there must be a convenient way to achieve this. How would I proceed removing frames/milliseconds from a video file without slicing, merging and re-encoding? All solutions I found involved exporting various times to various formats, and I'm hopeful there will be no need to do so. With ffmpeg/avconv it's necessary to convert the temporary streams to .ts, then concatenate, and finally re-encode in the original format. Python library MoviePy seemed to do quite exactly what I needed but: The cutout function returns a file which can not be exported, as the write_videofile function tries and fails to fetch the removed frames If I instead slice the original file into various clips and then merge them with concatenate_videoclips, the export doesn't fail but takes twice the length of the video. The resulting video has then a faster frame-rate, with only the cue points for the concatenated videos being timely placed, and audio playing at normal speed. It's also worth noting that the output file, despite being 5-7% shorter in duration, is about 15% bigger. Is there any other library I'm not aware of I might look into? What I'm trying to imagine is an abstraction layer providing easy access to the most common video formats, giving me the ability to pop the unwanted frames and update the file headers without delving into the specifics of each and every format.
python/video - removing frames without re-encoding
0
0
0
1,097
42,315,499
2017-02-18T13:03:00.000
-1
0
1
0
python
42,316,072
2
false
0
0
That depends on the data you want to store in each object, but in most cases lists should do.
1
0
0
It appears to me that each instance of a particular class has its own dictionary. This could waste a lot of space when there is a large number of identically structured class objects. Is this actually the case, or is the underlying mechanism more efficient, only creating an object's dictionary when it is explicitly asked for. I am considering an application where I may have a very large number, possibly into millions, of objects, should I avoid using a class and instead use a sequence with a named constant as the index?
Data efficiency of class objects
-0.099668
0
0
75
42,316,431
2017-02-18T14:30:00.000
1
0
0
0
python,scikit-learn,gensim,svd
42,317,702
1
true
0
0
I don't really see why using Spark's MLlib SVD would improve performance or avoid memory errors. You simply exceed the size of your RAM. You have some options to deal with that: Reduce the dictionary size of your tf-idf (playing with the max_df and min_df parameters of scikit-learn, for example). Use a hashing vectorizer instead of tf-idf. Get more RAM (but at some point tf-idf + SVD is not scalable). Also, you should show a code sample; you might be doing something wrong in your Python code.
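A sketch of the first two options above with scikit-learn; the tiny stand-in corpus, feature count, and component count are illustrative (the question would use its 50,000 documents and 100-500 components):

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["first training paragraph", "second one", "a third", "and a fourth"]  # stand-in corpus

# Cap the feature space up front instead of materialising ~6 million tf-idf columns...
vectorizer = HashingVectorizer(n_features=2**18, ngram_range=(1, 2))
X = vectorizer.fit_transform(docs)          # sparse matrix

# ...then reduce with an SVD implementation that accepts sparse input directly.
svd = TruncatedSVD(n_components=2)          # use 100-500 on the real data
X_reduced = svd.fit_transform(X)
```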
1
1
1
I am trying to classify paragraphs based on their sentiments. I have training data of 600 thousand documents. When I convert them to Tf-Idf vector space with words as analyzer and ngram range as 1-2 there are almost 6 million features. So I have to do Singular value decomposition (SVD) to reduce features. I have tried gensim and sklearn's SVD feature. Both work fine for feature reduction till 100 but as soon as I try for 200 features they throw memory error. Also I have not used entire document (600 thousand) as training data, I have taken 50000 documents only. So essentially my training matrix is: 50000 * 6 million and want to reduce it to 50000 * (100 to 500) Is there any other way I can implement it in python, or do I have to implement sparks mllib SVD(written for only java and scala) ? If Yes, how much faster will it be? System specification: 32 Gb RAM with 4 core processors on ubuntu 14.04
SVD using Scikit-Learn and Gensim with 6 million features
1.2
0
0
961
42,317,817
2017-02-18T16:40:00.000
0
0
0
0
python,node.js,dronekit
42,536,435
1
false
0
0
I believe you can build your GUI in Node.js and use UDP or a websocket to communicate with the Python code, where you build a DroneKit wrapper exposed over UDP or a websocket.
1
0
0
I'm trying to use Dronekit Python 2 for creating a minimalistic GCS (ground station). From the examples it looks like python scripts always finish and the connection with the vehicle is lost. That said, Is there any way to code a python script that works like a thread and only exit once it get's a command from Nodejs? Nodejs has the python-shell module that is supposed to send messages to python via STDIN. So my goal is to run python script from Nodejs python-shell, and then send commands to dronekit (connect, arm, takeoff, etc). Thanks for your help!
Dronekit Python - Is it possible to send commands from Nodejs?
0
0
0
314
42,317,953
2017-02-18T16:54:00.000
-1
0
0
0
python,tensorflow,mnist
47,821,847
1
false
0
0
Found the answer, I guess... one_hot=True transformed the scalar label into a one-hot vector :) Thanks for your time anyway!
1
0
1
currently i am looking for a way to filter specific classes out of my training dataset (MNIST) to train a neural network on different constellations, e.g. train a network only on classes 4,5,6 then train it on 0,1,2,3,4,5,6,7,8,9 to evaluate the results with the test dataset. I'd like to do it with an argument parser via console to chose which classes should be in my training dataset so i can split this into mini batches. I think i could do it with sorting out via labels but i am kinda stuck at this moment... would appreciate any tip!!! Greetings, Alex
Select specific MNIST classes to train a neural network in TensorFlow
-0.197375
0
0
1,514
42,319,189
2017-02-18T18:40:00.000
0
0
0
0
python,django,django-models
42,319,231
2
false
1
0
No, you can't. The length of string that can be accepted depends on the structure of your database, so you'd have to migrate your database on every max_length change! The solution to your problem is simply to estimate the maximum size of the value you'd like to save (think about your dev environment here) and set max_length accordingly.
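For illustration, the two usual ways this plays out in a model definition (this belongs in an app's models.py); the model name is hypothetical, and TextField is an alternative the answer above does not mention:

```python
from django.db import models

class Place(models.Model):
    # Size the column generously up front (changing it later requires a migration)...
    landMark = models.CharField(max_length=255, blank=True)

    # ...or, if the length really is unbounded, a TextField has no max_length at all:
    # landMark = models.TextField(blank=True)
```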
1
1
0
I have one Django model field definition like this: landMark = models.CharField(max_length=50, blank=True) Can I define the landMark field dynamically so that it is capable of holding a string which has a varying size ?
Dynamic length Django model field
0
0
0
1,722
42,319,327
2017-02-18T18:52:00.000
1
0
1
0
python,pycharm,pillow,conda
42,324,821
1
false
0
0
Invalidating the caches and restarting would work. If you want to refresh the Pycharm cache, try going to the far left of PyCharm, and choose [File|Invalidate Caches/Restart...]
1
0
0
I'm using Python 3.5 and I was trying to download and use Pillow 4.0.0. I got it through Conda, and it shows in its package menu, as well as the module list in Pycharm. However, even when I have the project interpreter set to anaconda, it will not recognize Pillow at all. I've also given it some time to scan through everything, to see if that would work.
Pycharm won't import Conda module
0.197375
0
0
352
42,320,197
2017-02-18T20:15:00.000
1
0
1
1
python,windows,scrapy,pypi
45,332,173
3
true
1
0
Use pip3 instead of pip, since you are using Python 3.
1
1
0
I got pip install scrapy in cmd, it said Collecting scrapy and after a few seconds I got the following error: Command "c:\python35\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\DELL\\AppData\\Local\\Temp\\pip-build-2nfj5t60\\Twisted\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\DELL\AppData\Local\Temp\pip-0bjk1w93-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\DELL\AppData\Local\Temp\pip-build-2nfj5t60\Twisted\ I am not able to get the error.
Not able to install scrapy in my windows 10 x64 machine
1.2
0
0
1,285
42,320,994
2017-02-18T21:41:00.000
0
0
0
1
python,centos6,systemd,fail2ban
49,656,117
3
false
0
0
I was able to fix this by editing the paths-common.conf file from: default_backend = %(default/backend)s to: default_backend = pynotify or default_backend = auto
1
2
0
When enabled sshd jail i see Starting fail2ban: ERROR NOK: ("Failed to initialize any backend for Jail 'sshd'",) ERROR NOK: ('sshd',) In logs : ERROR Backend 'systemd' failed to initialize due to No module named systemd ERROR Failed to initialize any backend for Jail 'sshd' Centos 6.7 no have systemd module . CentOS 6.7, python 2.6
Cant enable fail2ban jail sshd
0
0
0
4,535
42,324,425
2017-02-19T06:40:00.000
10
0
1
0
python,windows,python-2.7,package,anaconda
42,325,553
1
true
0
0
If you uninstall from the control panel, it should remove all packages with it. To ensure that your path doesn't contain your old python when you try and use anaconda, you should remove Python from your path. In windows 10: From desktop go bottom left and find the menu. Click system, then Advanced System Settings In this window, go to the Advanced tab and click on the environment variables button. From there you can edit your Path, with the edit button. Make sure there is no reference to Python here. Also, all variables are separated by a ; so make sure all syntax is good before saving. Install anaconda and at the end of the install it should ask if you want to make it the default Python. Say yes and every time you or another program asks for Python, it will get pointed to anaconda.
1
6
0
I wish to uninstall Python 2.7 and all packages connected to it. I initially installed Python from the official website and I installed all packages using the pip install command. Would uninstalling Python from the control panel also uninstall all packages automatically? The reason I want to uninstall Python is because I want to use Anaconda in order to be able to manage packages more easily and also be able to install both Python 2 and 3 to switch between them back and forth.
How to uninstall Python and all packages
1.2
0
0
46,177
42,326,377
2017-02-19T10:58:00.000
0
0
1
0
python,python-2.7,pip,praw
42,326,418
3
false
0
0
It says it can't write to '/Library/Python/2.7/site-packages/pip'. Adapt read-write rights on that folder or try sudo pip install praw
1
0
0
I want to install praw with pip install praw command but before I want to install pip, couldn't manage to do it. Collecting pip Using cached pip-9.0.1-py2.py3-none-any.whl Collecting wheel Using cached wheel-0.29.0-py2.py3-none-any.whl Installing collected packages: pip, wheel Exception: Traceback (most recent call last): File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/commands/install.py", line 342, in run prefix=options.prefix_path, File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/req/req_set.py", line 784, in install **kwargs File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/req/req_install.py", line 851, in install self.move_wheel_files(self.source_dir, root=root, prefix=prefix) File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/req/req_install.py", line 1064, in move_wheel_files isolated=self.isolated, File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/wheel.py", line 345, in move_wheel_files clobber(source, lib_dir, True) File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/wheel.py", line 316, in clobber ensure_dir(destdir) File "/var/folders/vv/v2drs0vd7jz6wlywr02cr3480000gn/T/tmpDdYH0a/pip.zip/pip/utils/init.py", line 83, in ensure_dir os.makedirs(path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pip'
Getting an exception while trying to install pip
0
0
0
1,185
42,327,847
2017-02-19T13:34:00.000
0
0
1
0
python,macos,jupyter-notebook,lag
42,332,364
3
false
0
0
Perhaps try a change of browser; Google Chrome is advisable.
3
0
0
I've created a few notebooks on a previous Mac. Since then, I've changed my Mac and reinstalled the new one from scratch, with the latest version of Anaconda (Python 2.7). When I try to open these old notebooks, they are very long to open. Sometimes, it really takes 2-3 minutes in Safari. It sucks on loading MathJax/extensions/Safe.js. Then, afterwards, it lags on typing : a few milliseconds between the keyboard action and the letters appearing in the cell. I've been searching extensively on the web for this issue, but didn't find anything. Uninstalling Anaconda and reinstalling again doesn't solve the issue. Can it come from the fact that the Anaconda version I used to create the notebooks is probably not the same as the one I've installed on my new Mac ? Thanks to all for your answer.
Jupyter Notebook - Notebooks created on another computer are very slow to open and lag on typing
0
0
0
524
42,327,847
2017-02-19T13:34:00.000
0
0
1
0
python,macos,jupyter-notebook,lag
42,332,783
3
false
0
0
I have an old Mac and noticed something while using Jupyter. Check your Activity Monitor: if the CPU is working like hell even though you are not doing anything, because of the process "VTDecoderXPCService", do this. Restart everything: right-click > Quit on your browser in the dock, Ctrl+C in the terminal where the notebook was launched, and start again with "jupyter notebook". I, and I guess most Mac users, don't shut down or really quit applications. It's just a guess, but maybe you have the same problem. I don't know yet what is causing it, but the MacBook gets hot, gets a bit laggy, and Jupyter takes 2-3x as long to open a notebook.
3
0
0
I've created a few notebooks on a previous Mac. Since then, I've changed my Mac and reinstalled the new one from scratch, with the latest version of Anaconda (Python 2.7). When I try to open these old notebooks, they are very long to open. Sometimes, it really takes 2-3 minutes in Safari. It sucks on loading MathJax/extensions/Safe.js. Then, afterwards, it lags on typing : a few milliseconds between the keyboard action and the letters appearing in the cell. I've been searching extensively on the web for this issue, but didn't find anything. Uninstalling Anaconda and reinstalling again doesn't solve the issue. Can it come from the fact that the Anaconda version I used to create the notebooks is probably not the same as the one I've installed on my new Mac ? Thanks to all for your answer.
Jupyter Notebook - Notebooks created on another computer are very slow to open and lag on typing
0
0
0
524
42,327,847
2017-02-19T13:34:00.000
0
0
1
0
python,macos,jupyter-notebook,lag
48,421,252
3
false
0
0
Try Firefox. I think it is faster than Chrome for opening and viewing large notebooks.
3
0
0
I've created a few notebooks on a previous Mac. Since then, I've changed my Mac and reinstalled the new one from scratch, with the latest version of Anaconda (Python 2.7). When I try to open these old notebooks, they take a very long time to open; sometimes it really takes 2-3 minutes in Safari. It gets stuck loading MathJax/extensions/Safe.js. Then, afterwards, it lags on typing: a delay of a few milliseconds between the keyboard action and the letters appearing in the cell. I've been searching extensively on the web for this issue, but didn't find anything. Uninstalling Anaconda and reinstalling it again doesn't solve the issue. Can it come from the fact that the Anaconda version I used to create the notebooks is probably not the same as the one I've installed on my new Mac? Thanks to all for your answers.
Jupyter Notebook - Notebooks created on another computer are very slow to open and lag on typing
0
0
0
524
42,328,024
2017-02-19T13:50:00.000
1
0
0
1
python
42,328,184
2
false
0
0
Expanding on my comment. "Is there a way to do that in python?" I think the short answer is: No. Not in Python, not in another language. "I want to write a python script that detects the backup drive" I don't think there's a way to do this. There's nothing inherent to a drive that could be used to detect whether a connected drive is intended by the user to be a "backup" drive or for something else. In other words, whether a drive is a "backup" drive or not is determined by the user's behavior, not by the properties of the drive itself. There are flags that can be set when a drive gets formatted (e.g. whether it's a bootable drive or not, etc.), but that's about it. If we're talking about a method that's intended for your personal use only, then something that might work is the following: create a naming convention for your drives (i.e. their labels when formatting), such as making sure your backup drives have the word "backup" somewhere in them; make sure you never deviate from this naming convention; then write a program that will iterate over your drives looking for the word "backup" in their names (a simple regular expression would work), as in the sketch below. Obviously, this would only work as long as the convention is followed. This is not a solution that you can arbitrarily apply in other situations where this assumption does not hold. "makes sure it is an external disk not thumbdrive or dvdrom." This one might be tricky. If you connect an external HDD into a USB plug, the system would know the drive's capacity and the fact that it's connected through the USB interface, but I think that's about it.
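To make the naming-convention idea concrete, here is a minimal sketch (not the asker's own code) that scans Windows drive letters with ctypes and keeps the ones whose volume label contains an assumed keyword such as "backup". The keyword and the label convention are assumptions; the Windows API calls themselves (GetLogicalDrives, GetVolumeInformationW) are standard.

```python
import ctypes
import string

def find_backup_drives(keyword="backup"):
    """Return the roots of drives whose volume label contains `keyword` (Windows only)."""
    kernel32 = ctypes.windll.kernel32
    matches = []
    bitmask = kernel32.GetLogicalDrives()          # one bit per existing drive letter
    for i, letter in enumerate(string.ascii_uppercase):
        if not (bitmask >> i) & 1:
            continue
        root = letter + ":\\"
        label_buf = ctypes.create_unicode_buffer(261)
        fs_buf = ctypes.create_unicode_buffer(261)
        ok = kernel32.GetVolumeInformationW(
            ctypes.c_wchar_p(root), label_buf, len(label_buf),
            None, None, None, fs_buf, len(fs_buf))
        if ok and keyword.lower() in label_buf.value.lower():
            matches.append(root)
    return matches

print(find_backup_drives())
```

It still cannot tell an external HDD from a thumb drive or a DVD drive, as noted above.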
1
0
0
I have 2 drives, OS drive and the backup drive in windows. I want to write a python script that detects the backup drive and returns the letter it is assigned and makes sure it is an external disk not thumbdrive or dvdrom. The letter assigned to the drive can vary. Is there a way to do that in python? I have been searching through but to no avail.
How to detect backup drive in windows using python?
0.099668
0
0
130
42,329,195
2017-02-19T15:41:00.000
0
0
1
0
python,jython,division,integer-division,jes
42,329,222
2
false
0
0
Use the float() function. In Python 2, 5/2 gives 2, while float(5)/float(2) gives 2.5.
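As a small illustration of the point above, reusing the question's own division function (the inputs are hypothetical, and this is Python 2 / JES syntax):

```python
def division(x):
    print x

division(1 / 2)                  # prints 0 in Python 2: integer division truncates first
division(float(1) / float(2))    # prints 0.5
division(1 / 2.0)                # prints 0.5: one float operand is enough
```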
1
0
0
So I'm trying to input a division problem as a parameter, and I am always getting 0 when the answer is < 1. I know why this is happening, but I'm not sure how to fix it so that it gives a decimal answer. def division(x): print x, where "x" is 1/2, and I want the code to print ".5". I'm using JES version 5.020. Any guidance would be much appreciated!
Division as input parameter returning 0
0
0
0
665
42,330,006
2017-02-19T16:51:00.000
1
1
1
0
dronekit-python,dronekit
42,332,432
1
false
0
0
While your vehicle is in any mode other than GUIDED, your dronekit script will not be able to control behaviour. However, the script can change the mode of the copter to GUIDED, send some commands, and then set the mode back to the previous mode when it is done. The example you gave of using lidar for obstacle avoidance is already a feature in progress, being built directly into normal flight modes. Perhaps it's not documented well enough yet, but maybe try digging into the code to see how it works.
1
1
0
I have successfully tested basic DroneKit scripts on a companion computer (Raspberry Pi) to achieve autonomous flight on Pixhawk controlled 3DR ArduCopter. The RPi is also connected to various sensors, and crunches that data in real time in the same python script- to influence the flight. Is it possible to pilot the drone manually with a Taranis as usual, while RPi (with DroneKit running) remains connected to Pixhawk and overrides the radio when needed? For example, a background prevention mechanism that takes control and moves the copter away if the pilot is about to crash into a wall (which is easily sensed using a LIDAR). Thank you!
Is it possible to control Pixhawk quadcopter with a Taranis and a DroneKit script simultaneously?
0.197375
0
0
266
42,330,815
2017-02-19T18:00:00.000
0
0
1
0
python,visual-studio-code,maya
42,336,085
1
false
0
0
Use "Attach to process" in Visual Studio and select "Maya.exe"
1
0
0
I was trying to debug my own plugins for Maya in Visual Studio Code, but I can't get it to work. I got the Script Editor connected with the VSCode console, but the code never stops at the breakpoints. Has anyone else tried this and gotten it working? Thanks all!
Autodesk Maya debugging with Visual Studio Code (VSCode)
0
0
0
1,603
42,335,603
2017-02-20T02:37:00.000
6
0
1
0
python
42,336,498
4
false
0
0
There's no established convention, but I think that code is easier to read if the main logic is near the top of the file. I typically will define a main at the top, and then call it from the very bottom.
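A minimal sketch of that layout, with hypothetical helper names, so the logic reads top-down but nothing runs until the very end:

```python
def main():
    # main() is defined first for readability; by the time it is actually
    # called (at the bottom of the file), build_greeting already exists.
    greeting = build_greeting("world")
    print(greeting)

def build_greeting(name):
    return "Hello, {}!".format(name)

if __name__ == "__main__":
    main()
```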
3
7
0
I'm a beginner programmer and would like to know where to place my main function in python. I tried googling it but could not find any specific results. I don't think it really matters when you run the program but I was wondering if there was a proper format. Thanks.
Should the main function and main() be placed at the start or the end of the program?
1
0
0
5,136
42,335,603
2017-02-20T02:37:00.000
1
0
1
0
python
64,987,615
4
false
0
0
Putting it at the end is the convention. If you call main() near the top instead, dependencies that are called in main but defined below that call won't exist yet and won't work.
3
7
0
I'm a beginner programmer and would like to know where to place my main function in python. I tried googling it but could not find any specific results. I don't think it really matters when you run the program but I was wondering if there was a proper format. Thanks.
Should the main function and main() be placed at the start or the end of the program?
0.049958
0
0
5,136
42,335,603
2017-02-20T02:37:00.000
2
0
1
0
python
42,335,679
4
false
0
0
No, there's no established format for this.
3
7
0
I'm a beginner programmer and would like to know where to place my main function in python. I tried googling it but could not find any specific results. I don't think it really matters when you run the program but I was wondering if there was a proper format. Thanks.
Should the main function and main() be placed at the start or the end of the program?
0.099668
0
0
5,136
42,338,318
2017-02-20T07:01:00.000
1
0
1
0
python,windows,upgrade
42,338,948
1
true
0
0
In Python, most major versions are released as separate packages. For instance, Python 2.6, Python 2.7 and Python 3.1 all live in separate packages on Ubuntu. You'll have to install 3.6.0 as a separate package.
1
0
0
Is there a way to upgrade Pyhton 3.x to the newest stable 3.x version on Windows, if the current version was installed using msi standalone installer? In my case I'm trying to upgrade 3.5.3 to 3.6.0
Upgrade Python 3.x to the newest stable version?
1.2
0
0
1,581
42,339,034
2017-02-20T07:46:00.000
0
0
1
1
python,debian
42,436,834
3
true
0
0
What I learnt from IRC is that I should install the modules in 'dist-packages' only, assuming that the admin would have installed only the Python provided by the Ubuntu repo.
2
2
0
I am building a deb package from source. The source used to install the modules in 'site-packages' in RHEL. On Ubuntu, 'site-packages' doesn't work for me. Searching over the net, it says that python Ubuntu would require it in 'dist-packages' But there are also references that python built from source would look in 'site-packages' Now I am confused, where should my deb packages install the modules so that it works irrespective of python built from source or installed from Ubuntu repo
Python module in 'dist-packages' vs. 'site-packages'
1.2
0
0
5,617
42,339,034
2017-02-20T07:46:00.000
11
0
1
1
python,debian
46,771,967
3
false
0
0
dist-packages is a Debian convention that is present in distros based on Debian. When we install a package using a package manager like apt-get, it is installed to dist-packages. Likewise, if you install using pip and pip itself was installed via the package manager, those packages will be installed in dist-packages. If you build Python from source, then pip comes with it; if you install a package using this pip, it'll be installed into site-packages. So it depends on which Python binary you are using: if you are using the binary that comes from the package manager, it will search in dist-packages, and if you are using a binary from a manual install, it'll search in site-packages.
2
2
0
I am building a deb package from source. The source used to install the modules in 'site-packages' in RHEL. On Ubuntu, 'site-packages' doesn't work for me. Searching over the net, it says that python Ubuntu would require it in 'dist-packages' But there are also references that python built from source would look in 'site-packages' Now I am confused, where should my deb packages install the modules so that it works irrespective of python built from source or installed from Ubuntu repo
Python module in 'dist-packages' vs. 'site-packages'
1
0
0
5,617
42,339,728
2017-02-20T08:33:00.000
1
1
0
1
python,jenkins
42,340,973
1
false
0
0
Jenkins is running your jobs as a different user, and typically on a different host (unless you let your Jenkins run on your local host and don't use slaves to run your jobs). As a result of these two aspects, you will also have a different environment (variables like HOME, PATH, PYTHONPATH, and all the other environment stuff like locales etc.). To find out the host, let a shell in the job execute hostname. To find out the Jenkins user, let a shell in the job execute id. To find out the environment, let a shell in the job execute set (which will produce a lot of output). My guess would be that in your case the modules you are trying to use are not installed on the Jenkins host.
1
0
0
I did notice something strange on my python server running jenkins. Basically if I run a script, which has dependencies (I use python via Brew), from console, it works fine. But when I run it via Jenkins, I get an error because that package was not found. When I call the script, I use python -m py.test -s myscript.py Is there a gotcha when using Jenkins, and call python as I do? I would expect that a command called in the bash section of Jenkins, would execute as if it was running in console, but from the result that I get, it seems that is not true. When I check for which python, I get back /usr/local/bin/python; which has the symlink to the brew version. If I echo $PYTHONPATH I get back the same path. One interesting thing though, is that if on Jenkins I call explicitly either /usr/local/bin/python -m or /usr/bin/python ; I get an error saying that there is no python there; but if I just use python -m, it works. This makes no sense to me.
is Python called differently via Jenkins?
0.197375
0
0
859
42,339,941
2017-02-20T08:47:00.000
1
0
0
0
python,cntk
42,521,000
1
true
0
0
You can create two minibatch sources, one for x and one for x_mask, both with randomize=False. Then the examples will be read in the order in which they are listed in the two map files. So as long as the map files are correct and the minibatch sizes are the same for both sources you will get the images and the masks in the order you want.
1
1
1
does anyone know how to create or use 2 minibatch sources or inputs a sorted way? My problem is the following: I have images named from 0 to 5000 and images named 0_mask to 5000_mask. For each image x the coressponding image x_mask is the regression image for a deconvolution output. So i need a way to tell cntk that each x corresponds to x_match and that there is no regression done between x and y_mask. I'm well aware of the cntk convolution sample. I've seen it. The problem are the two input streams with x and x_mask. Can i combine them and make the reference, i need it in an easy way? Thank you in advance.
CNTK 2 sorted minibatch sources
1.2
0
0
65
42,342,082
2017-02-20T10:28:00.000
0
0
1
0
python,pyinstaller,.so
42,343,120
1
false
0
0
You need to make sure that your .so file is in your system library path as well as your Python library path. Setting your LD path is a quick fix, but PyInstaller will need it in your system path. Depending on your Windows version, that is under Environment Variables > PATH.
1
0
0
A python script uses a .so file. I made an exe for this python script using PyInstaller. But when I execute the generated exe, it is unable to locate this .so file So how to link this .so to a python code that will get converted to a .exe Note: when running the .py program, if I set the location of .so in LD_LIBRARY_PATH, it executes the program correctly but once I make the exe using pyinstaller script.py --onefile (with .so in the same directory), it doesnt find the .so file.. Thank you in advance
PyInstaller: Can't find .so module when exe is executed
0
0
0
752
42,344,095
2017-02-20T12:03:00.000
1
0
0
0
python,ibm-cloud,ibm-watson,chatbot,watson-conversation
42,350,581
1
true
0
0
Search for a "request response". This is a way to redirect the conversation / dialog flow to your app, and then forward it back to watson. Hope it helps.
1
1
0
I am working on a chat-box. I am using IBM Watson Conversation to make a conversation bot. My query is: Suppose the user is talking to the bot in some specific node, and suddenly the user asks a random question, like "what is the weather?", my bot should be able to connect to few Internet websites, search the content and come with a relevant reply, and after that as the user inputs to get back to the previous topic, the bot should be able to resume from where it left. Summary: How to code in Python to make the bot jump to some intent and then get back to previous intent. Thanks!
IBM Watson Conversation - Python: Make chat bot jump to some intent & get back to previous intent
1.2
0
1
1,028
42,346,047
2017-02-20T13:37:00.000
3
0
1
0
python,pip,conda,nexus,pypi
42,379,584
1
false
0
0
We do not support Conda packages at the current time. I myself have never tried it, and I suspect it would not work to try using a PyPI-hosted repo, etc., for Conda packages.
1
7
0
My company uses Nexus repository as npm proxy for package management. Does anyone have experience using Nexus to hold Conda packages (Python) and for proxy? In the Nexus documentation, it clearly says that the Nexus supports the PyPI repository, but does it also support Conda repositories?
conda package on Nexus Repository
0.53705
0
0
3,823
42,348,514
2017-02-20T15:33:00.000
2
0
0
0
python,django,database-migration
42,348,573
1
true
1
0
If an app doesn't have a migrations/ directory with an __init__.py file (even on Python 3), Django won't create any migrations if you don't specify the app name. You either have to create the directories and __init__.py files, or you have to call ./manage.py makemigrations <app_name> for each app, which forces Django to create the directory. It doesn't. It may connect to the database during the makemigrations command, but the migrations are purely based on your models.
1
1
0
I have 2 related questions. I have deleted all my migrations, including the migrations directory, from all the apps in my project. I've started with a clean database. But still, when I run ./manage.py makemigrations, django says there are no changes to be made. How do I completely start over with migrations? No squashing, just starting over. It seems that when I call makemigrations, django consults the database. I'd like my codebase to be the only source of truth for my migrations. Is there a way to do this?
Start over with django migrations
1.2
0
0
678
42,349,418
2017-02-20T16:17:00.000
0
0
1
0
python,pycharm,ide
42,440,420
2
false
1
0
Thanks for the answers, guys! Apparently the problem was that I had once set a VM RAM size (also in the system environment) which prevented PyCharm from starting.
1
1
0
I'm new to the Python community and intend to use PyCharm for the Django framework. However, after installing PyCharm (32-bit launcher, same as my OS), the software can't be launched. After double-clicking the PyCharm icon nothing happens; I also tried pycharm.exe, pycharm.bat, and pycharm64.exe, and they all failed to start. I have searched for the system requirements and I believe everything fits: my OS is 32-bit, with 4 GB RAM and 100+ GB storage available. Hope anyone could give me hints about this issue. Thank you!
PyCharm doesn't start up
0
0
0
2,139
42,349,732
2017-02-20T16:33:00.000
0
0
1
0
python
42,349,819
1
false
0
0
You can use pdb.set_trace() at the end of the script in order to enter an interactive debug mode. It is a bit of an abuse of the module, but it will let you write code and manipulate the current scope.
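A rough sketch of that idea, where the hypothetical load_data() stands in for the slow analysis step:

```python
# analysis.py
time, amp1, amp2 = load_data()   # hypothetical long-running computation

# Drop into an interactive prompt here, with all the variables above still in scope,
# so you can plot different combinations without re-running the script.
import pdb; pdb.set_trace()
```

Running the script with python -i analysis.py is another option: once the script finishes, it leaves you in a normal interactive interpreter with the script's globals available.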
1
0
0
I'm not sure if there is a way to do this. I use TextWrangler to write my Python script and then use the terminal to run it, so I just run something like python code.py. In my script, I'm doing some data analysis where I create variables, but then I want to plot a few different things after running the script. So basically, is there a way I can enter a dynamic Python mode so I can then plot different things or add more code? I would rather not just add more lines to the script since it takes time to run. For example, at the end of the script I have created the variables time, amp1, amp2, phase1, phase2, freq1, freq2, and I may want to plot different things together, like time vs amp1 or any other combination. In the terminal, if I type python it will enter a Python mode where I could type out the entire script again and then plot different things and keep all the variables, but is there a way to keep all the variables saved locally after running the script?
Typing more python code after running a script
0
0
0
39
42,349,980
2017-02-20T16:45:00.000
1
0
0
1
python,pyspark
42,718,191
5
false
0
0
The possible issues faced when running Spark on Windows are not giving the proper path, or using Python 3.x to run Spark. So, do check whether the path given for Spark, i.e. /usr/local/spark, is proper or not, and do set the Python path to Python 2.x (remove Python 3.x).
1
22
1
I installed Spark on Windows, and I'm unable to start pyspark. When I type in c:\Spark\bin\pyspark, I get the following error: Python 3.6.0 |Anaconda custom (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Traceback (most recent call last): File "c:\Spark\bin..\python\pyspark\shell.py", line 30, in import pyspark File "c:\Spark\python\pyspark__init__.py", line 44, in from pyspark.context import SparkContext File "c:\Spark\python\pyspark\context.py", line 36, in from pyspark.java_gateway import launch_gateway File "c:\Spark\python\pyspark\java_gateway.py", line 31, in from py4j.java_gateway import java_import, JavaGateway, GatewayClient File "", line 961, in _find_and_load File "", line 950, in _find_and_load_unlocked File "", line 646, in _load_unlocked File "", line 616, in _load_backward_compatible File "c:\Spark\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 18, in File "C:\Users\Eigenaar\Anaconda3\lib\pydoc.py", line 62, in import pkgutil File "C:\Users\Eigenaar\Anaconda3\lib\pkgutil.py", line 22, in ModuleInfo = namedtuple('ModuleInfo', 'module_finder name ispkg') File "c:\Spark\python\pyspark\serializers.py", line 393, in namedtuple cls = _old_namedtuple(*args, **kwargs) TypeError: namedtuple() missing 3 required keyword-only arguments: 'verbose', 'rename', and 'module' what am I doing wrong here?
Unable to run pyspark
0.039979
0
0
21,967
42,350,006
2017-02-20T16:46:00.000
2
0
0
0
python-2.7,computer-vision,homography,opticalflow
51,124,327
2
false
0
1
Optical flow: detect motions from one frame to the next. This is either sparse (a few positions of interest are tracked, such as in the LKDemo.cpp example) or dense (one motion per position for many positions (e.g. all pixels), such as the Farneback demos in OpenCV). Regardless of whether you have dense or sparse flow, there are different kinds of transforms that optical flow methods may try to estimate. The most common transform is translation. This is just the offset of position from frame to frame. It can be visualized as vectors per frame, or as color when the flow is dense and high resolution. One is not limited to only estimating translation per position. You can also estimate rotation, for example (how a point is rotating from frame to frame), or how it is skewed. In affine optical flow, you estimate a full affine transform per position (change of translation, rotation, skew and scale). Affine flow is a classical and powerful technique that is much misunderstood, and probably used far less than it should be. Affine transforms are given most economically by a 2x3 matrix: 6 degrees of freedom, compared to the regular 2 d.o.f. of translational optical flow. Leaving the topic of optical flow, an even more general family of transforms is called "homographies" or "projective transforms". They require a 3x3 transform, and have 8 d.o.f. The affine family is not enough to describe the sort of deformation a plane undergoes when you view it with projective distortion. Homographies are commonly estimated from many matched points between frames. In that sense, they use the output of regular translational optical flow (but the affine approach is often used under the hood to improve the results). All of this only scratches the surface...
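A rough sketch of how those pieces fit together in OpenCV, assuming prev_frame and curr_frame are two consecutive BGR frames already loaded; the parameter values here are placeholders, not recommendations:

```python
import cv2
import numpy as np

prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

# 1. Pick good features and track them (translational optical flow per point).
pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
pts_curr, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)

good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# 2. Fit a single 3x3 homography (8 d.o.f.) to the matched points, robustly with RANSAC.
H, mask = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 5.0)
print(H)
```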
1
2
1
I am using optical flow to track some features. I am a beginner and was told to follow these steps: match good features to track, run the Lucas-Kanade algorithm on them, find the homography between the 1st frame and the current frame, do camera calibration, and decompose the homography map. Now, what I don't understand is the homography part: you find the features and track them using Lucas-Kanade, and then the homography is used to compute camera motion (rotation and translation) between two images. But isn't that what Lucas-Kanade does? Or does Lucas-Kanade just track them and the homography makes the calculations? I am struggling to understand the difference between them. Thanks in advance.
Homography and Lucas Kanade what is the difference?
0.197375
0
0
1,959
42,350,259
2017-02-20T16:58:00.000
1
0
0
0
python,django,datetime,time,format
42,351,357
4
false
1
0
So apparently the only thing I needed to do was to change "SHORT_DATETIME_FORMAT" to "SHORT_DATE_FORMAT", and that way we get rid of the time part. Thanks guys!
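For reference, a minimal sketch of the call based on the snippet in the question (the exact output string still depends on the project's locale/format settings):

```python
from django.utils import formats

# input_date is the datetime coming from the database
final_date = formats.date_format(input_date, "SHORT_DATE_FORMAT")  # e.g. "02/20/2017", no time part
```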
1
0
0
I need to send an email with a date, and the date should be "mm/dd/yyyy". From the database i get the date in this format: 2017-02-26T23:00:00Z So I added from django.utils import formats along with all the imports and then I also added in my function final_date = formats.date_format(input_date, "SHORT_DATETIME_FORMAT") The thing is that i get exactly what i want with the date, but also the time. Is there any way to get rid of the time within my formatting?? I know i can use the function split(), but that seems an ugly way to achieve this. TL;DR What I have --> 02/20/2017 4:40 p.m. What I want --> 02/20/2017
Delete time from datetime format in django
0.049958
0
0
1,390
42,350,592
2017-02-20T17:15:00.000
3
1
0
1
python,enterprise-architect
42,353,157
1
true
0
0
EA uses an RDBMS to store its repository. In the simplest case, this is an MS Access database renamed to .EAP. You can modify this RDBMS directly, but only if you know what you're doing. The recommended way is to use the API. Often a mix of both is the preferred way. You can use Python in both cases without issues. Shameless self plug: I have published books about EA's internals and also its API on LeanPub.
1
3
0
One of the first things I do on a new project is to knock up a quick script to parse a log file and generate a message sequence chart, as I believe that picture is worth a thousand words. New project, and it is mandated that we use only Enterprise Architect. I have no idea what its save file format is. Is it possible to generate a file which will open in EA from Python? If so, where can I find an example or a tutorial?
Can I generate Enterprise Architect diagrams from Python?
1.2
0
0
1,220
42,351,728
2017-02-20T18:21:00.000
0
0
1
0
windows,python-3.x,cmd,tensorflow
42,352,213
5
false
0
0
So, are you sure you correctly downgraded your Python? Run this command on the command line: pip -V. It should print the pip version and the Python version.
1
1
1
This question is for a Windows 10 laptop. I'm currently trying to install tensorflow, however, when I run: pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl I get the following error: tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl is not a supported wheel on this platform. I am trying to install the cpu-version only of tensorflow in an Anaconda 4.3.0 version. I had python 3.6.0 and then I downgraded to 3.5.0, none of them worked.
Tensorflow installation on Windows 10, error 'Not a supported wheel on this platform'
0
0
0
3,732
42,352,104
2017-02-20T18:44:00.000
1
1
0
1
python,shell,ubuntu,docker
42,352,587
1
true
0
0
Simply launch your container with something like docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker ... and it should do the trick
1
2
0
If I am on my host machine, I can kickoff a script inside a Docker container using: docker exec my_container bash myscript.sh However, let's say I want to run myscript.sh inside my_container from another container bob. If I run the command above while I'm in the shell of bob, it doesn't work (Docker isn't even installed in bob). What's the best way to do this?
Run shell script inside Docker container from another Docker container?
1.2
0
0
775
42,354,363
2017-02-20T21:16:00.000
0
0
1
1
python,google-cloud-sdk
45,141,694
3
false
0
0
Ensure two things: (1) while installing the Google Cloud SDK, check the 'Bundled Python' option; it will install both python and python3. (2) Make sure your PYTHONPATH environment variable points to the directory containing the python.exe file. This worked out for me.
1
3
0
I get this error message when running install.bat (or install.sh through 'bash' shell) of google-cloud-sdk. Python is version 3.6. Any suggestions?
gcloud.py attributeError: module 'enum' has no attribute 'Int Flag'
0
0
0
4,609
42,354,852
2017-02-20T21:50:00.000
0
0
0
0
python,ubuntu,module,openerp,odoo-10
42,359,847
2
false
1
0
Please check that there is no duplicate folder with the same name in the addons path. Sometimes, if there is a zip file with the same name in the addons path, the update doesn't take effect.
2
0
0
I'm having trouble updating a model in Odoo: the tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it. Is there a way to make the database synchronized with my model?
Updating a module's model in Odoo 10
0
1
0
1,475
42,354,852
2017-02-20T21:50:00.000
0
0
0
0
python,ubuntu,module,openerp,odoo-10
42,359,793
2
false
1
0
If you save changes to the module, restart the server, and upgrade the module - all changes should be applied. Changes to tables (e.g. fields) should only require the module to be upgraded, not a server reboot. Python changes (e.g. contents of a method) require a server restart, not a module upgrade. If the changes are not occurring, then it is possible that you have a different problem. I would look at things like: are you looking at the correct database/tables, are you saving your changes, are the changes being made to the correct files/in the correct locations.
2
0
0
I'm having trouble updating a model in Odoo: the tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it. Is there a way to make the database synchronized with my model?
Updating a module's model in Odoo 10
0
1
0
1,475
42,356,276
2017-02-20T23:53:00.000
1
0
0
0
php,python,mysql,virtualenv,lamp
42,356,311
2
false
1
0
Read about Docker if you want to make separate environments without a virtual machine.
1
1
0
I want to isolate my LAMP installment into a virtual environment, I tried using virtualbox but my 4GB of RAM is not helping. My question is if I run sudo apt-get install lamp-server^ while in "venv"... would it install the mysql-server, apache2 and PHP into the virtualenv only or is the installation scope system-wide. I really want a good solution for isolating these dev environments and their dependencies, and am hence exploring simple and efficient options given my system constraints. I have another Django (and mysql and gcloud) solution on the same computer and would like for these new installations to not mess with this. I'm using: OS: Ubuntu 16.04 LTS Python: 2.7
Install LAMP Stack into Virtual Environment
0.099668
1
0
708
42,356,298
2017-02-20T23:56:00.000
1
0
1
0
python,hdl,myhdl
44,230,813
1
false
0
0
From your description, I think that you could try to solve your problem using a state machine and splitting your circuits into independent units. Since you said that you don't want to add any parallelism in your design, you could try to keep your design within a single circuit, with only one clock, and control the behavior using registers. Try to describe the circuit considering that one action will be taken per clock. Try to add some example code and more concrete information.
1
0
0
I'm trying to learn MyHDL by writing a very simple machine with just a handful of instructions and operations. What I'm struggling with is the best way to design my machine to handle operations that take multiple clock cycles to resolve. Currently, all of the hardware components that I've written need just a single tick to resolve, so it's the control unit that I'm having trouble with. For example, let's say that all of my machine's operations take between 1 and 3 clock cycles to complete, which means that I'll need to have 3 cycles for every instruction (as I'm not doing any parallelism right now). This will mean that I need three stages to my machine or, in HDL terms, three clock sensitive logic blocks. One complete iteration of the machine would look like: Control A Tick :: Components Control B Tick :: Components Control C Tick :: Components Back to A Since there is no parallelism and each stage is using shared hardware, I want the control blocks to be triggered sequentially and in order. To do this, would I want multiple clocks? One clock for the components and one for the control where the ticking of the components clock is done at the end of each control phase? But then control A, B, and C will all be executed together by each clock tick. Would I then want four clocks, with one for the components and one for each control phase where the component clock and the clock of the next phase is advanced at the end of each control phase? Or do I want just one clock and some boolean signals that tell each phase when they should be going (every control phase would check a flag and if it's set, they would execute and set the flag for the next phase)?
What's the recommended MyHDL design pattern for multi-tick operations?
0.197375
0
0
89
42,357,450
2017-02-21T02:09:00.000
1
0
0
0
python,scikit-learn,nmf
45,426,365
2
false
0
0
In your data matrix the missing values can be 0, but rather than storing a bunch of zeros for a very sparse matrix you would usually store a COO matrix instead, where each row is stored in CSR format. If you are using NMF for recommendations, then you would be factorising your data matrix X by finding W and H such that W.H approximately equals X with the condition that all three matrices are non-negative. When you reconstruct this matrix X some of the missing values (where you would have stored zeros) may become non-zero and some may remain zero. At this point, in the reconstructed matrix, the values are your predictions. So to answer your question, are they 0's or missing data in the NMF model? The NMF model once fit will contain your predicted values, so I would count them as zero. This is a method of predicting missing values in the data.
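A small sketch of what that factorisation and reconstruction look like with scikit-learn, on a made-up ratings-style matrix; the data values and the number of components are arbitrary:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import NMF

# Tiny example matrix; the zeros here are the "missing" entries.
X = csr_matrix(np.array([[5, 0, 1],
                         [4, 0, 0],
                         [0, 2, 5]], dtype=float))

model = NMF(n_components=2, init='random', random_state=0)
W = model.fit_transform(X)     # shape (3, 2)
H = model.components_          # shape (2, 3)

X_hat = np.dot(W, H)           # reconstruction: former zeros may now hold non-zero predictions
print(X_hat)
```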
1
3
1
I am using Scikit-learn's non-negative matrix factorization (NMF) to perform NMF on a sparse matrix where the zero entries are missing data. I was wondering if the Scikit-learn's NMF implementation views zero entries as 0 or missing data. Thank you!
Scikit-learn non-negative matrix factorization (NMF) for sparse matrix
0.099668
0
0
1,903
42,357,801
2017-02-21T02:51:00.000
0
0
0
0
python,opencv
42,357,893
1
false
0
0
It's dependent on the pixel-to-distance ratio. You can measure this by taking an image of a meter stick and measuring its pixel width (for this example, say it's 1000 px). The ratio of pixels to distance is 1000 px / 100 cm, or 10. You can now use this constant as a multiplier: for a given length and width in cm, you just multiply by the ratio to get a pixel height and width, which can be passed into OpenCV's rectangle drawing function.
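A minimal sketch of that multiplier idea, assuming a previously measured ratio of 10 px per cm; the helper name and values are made up:

```python
import cv2
import numpy as np

PX_PER_CM = 10.0  # measured once: e.g. a 100 cm stick spans 1000 px in your setup

def draw_rect_cm(img, top_left, width_cm, height_cm, color=(0, 255, 0), thickness=2):
    # Convert physical size to pixels using the calibrated ratio, then draw normally.
    x, y = top_left
    w = int(round(width_cm * PX_PER_CM))
    h = int(round(height_cm * PX_PER_CM))
    cv2.rectangle(img, (x, y), (x + w, y + h), color, thickness)

canvas = np.zeros((400, 600, 3), dtype=np.uint8)
draw_rect_cm(canvas, (50, 50), 20, 10)   # a 20 cm x 10 cm rectangle at this scale
cv2.imwrite("rect.png", canvas)
```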
1
0
1
I know how to draw a rectangle in OpenCV. But can I choose the length and breadth to be in centimetres?
Draw rectangle in opencv with length and breadth in cms?
0
0
0
408
42,358,288
2017-02-21T03:46:00.000
1
0
1
0
python-2.7,hash
42,359,700
1
true
0
0
How about sorting the tuple first? All permutations would become the same tuple after sorting and thus give the same hash value.
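A tiny sketch of that idea:

```python
def unordered_key(t):
    # Any permutation of the same elements sorts to the same tuple,
    # so all permutations share one hash value.
    return hash(tuple(sorted(t)))

print(unordered_key((1, 2, 3)))   # same value as ...
print(unordered_key((2, 1, 3)))   # ... this one
```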
1
0
0
I have a tuple that is (a, b, c). I want to get a common value to use as a key from this tuple, and I thought of something like hashing. For example, (a, b, c) and (b, a, c) should both give me the same hash value. However, I tried to hash (1, 2, 3) and (2, 1, 3) and ended up with different hash values. How do I do this?
Getting the same "key" for a tuple (Python)
1.2
0
0
46
42,358,632
2017-02-21T04:20:00.000
2
0
1
0
python,vb.net,tkinter
42,358,885
2
false
0
1
You have to think about where your strengths lie. If you are a strong Python coder, then go in that direction; if you are a strong VB coder, then go in that direction. I would argue that neither of the options you are thinking of using would be ideal. I would actually recommend C# within Visual Studio 2015 Community. Python isn't natively compiled to an EXE and there are more hoops to jump through to get a compiled executable. Recently I used C# and Visual Studio 2015 Community to create a similar small GUI interface for the work I was doing. I have previously used Python with Qt. The extra hoops I had to work through to get an EXE from Python definitely made the choice of Python a downside at the time. C# has a large number of libraries available for it via the NuGet packages in Visual Studio. VB is quite dated (but still quite usable) compared to C#. Python also has a large number of currently supported open-source libraries.
2
1
0
I'm making a small audio editor interface. It needs to have a dialog box with 3-4 button options along with an activate option; when the user clicks on activate, another dialog box will pop up asking for his MAC address and a code. I heard Visual Basic is good for making .exe files, but does it give me full control over the application?
Which is Easier to create an .exe file ,Vb or python with tkinter?
0.197375
0
0
431
42,358,632
2017-02-21T04:20:00.000
2
0
1
0
python,vb.net,tkinter
42,362,613
2
false
0
1
As you don't have any experience with C# (as mentioned in a comment), you should go with VB.Net. I guess it has all the features you described you needed. But I would recommend you learn C# as soon as possible if you want to become a good coder, because C# is similar to many other popular and strong coding languages, so it will be easier to learn other languages too when you need them. I too don't have much knowledge about C# and am trying to learn it. Believe me, in my limited experience it's much more convenient than VB.Net.
2
1
0
I'm making a small audio editor interface. It needs to have a dialog box with 3-4 button options along with an activate option; when the user clicks on activate, another dialog box will pop up asking for his MAC address and a code. I heard Visual Basic is good for making .exe files, but does it give me full control over the application?
Which is Easier to create an .exe file ,Vb or python with tkinter?
0.197375
0
0
431
42,359,440
2017-02-21T05:33:00.000
0
0
0
0
python,opencv,neural-network,classification,sift
42,635,609
2
false
0
0
It would be good to apply normalization to each image before extracting the features.
1
0
1
I'm trying to classify images using an Artificial Neural Network and the approach I want to try is: Get feature descriptors (using SIFT for now) Classify using a Neural Network I'm using OpenCV3 and Python for this. I'm relatively new to Machine Learning and I have the following question - Each image that I analyse will have different number of 'keypoints' and hence different dimensions of the 2D 'descriptor' array. How do I decide the input for my ANN. For example for one sample image the descriptor shape is (12211, 128) so do I flatten this array and use it as an input, in which case I have to worry about varying input sizes for each image, or do I compute something else for the input?
SIFT Input to ANN
0
0
0
578
42,365,761
2017-02-21T11:14:00.000
-1
0
0
0
python,networkx,shortest-path
42,365,979
2
false
0
0
I got it. I created a subgraph for every output path from all_simple_paths() and obtained the sum over an attribute by using the size() function.
1
1
0
I want to get the sum of weights (total cost/distance encountered) of a given path in a networkx multigraph. It's like the current shortest_path_length() function but I plan to use it on the paths returned by the all_simple_paths() function. Is there a way to do that? I can't just iterate over all the nodes in the path because since it's a multigraph, I will need the key for that given path to be able to know which edge is used. Thank you.
Calculate sum of weights in NetworkX multigraph given path
-0.099668
0
1
1,902
42,367,340
2017-02-21T12:27:00.000
1
0
1
0
python,debugging,spyder
42,395,122
2
true
0
0
If you're trying to set new breakpoints after starting a debugging session in an IPython console, that was fixed in Spyder 3.1.3. So please update to that version.
2
1
0
I am not able to add breakpoints in my Spyder IDE with Python 3.6. I have tried restarting my computer, but neither F12 nor clicking on Debug -> Set/Clear breakpoint lets me enter a breakpoint. Could anybody explain what I am doing wrong? I am sure this is a stupid error. I am running Spyder 3.1.2 on Windows 10. Thanks.
Spyder and Python Debug: can not set breakpoint
1.2
0
0
4,372
42,367,340
2017-02-21T12:27:00.000
-1
0
1
0
python,debugging,spyder
64,813,310
2
false
0
0
Running ipdb in Spyder 3.6.2 on Win 10: when I hit continue, the debugger bypasses all breakpoints. It doesn't matter whether I set them inside the debugger or beforehand in the editor. Breakpoints do not work in Spyder 3.6.2 on Win 10.
2
1
0
I am not able to add breakpoints in my Spyder IDE with Python 3.6. I have tried restarting my computer, but neither F12 nor clicking on Debug -> Set/Clear breakpoint lets me enter a breakpoint. Could anybody explain what I am doing wrong? I am sure this is a stupid error. I am running Spyder 3.1.2 on Windows 10. Thanks.
Spyder and Python Debug: can not set breakpoint
-0.099668
0
0
4,372
42,370,620
2017-02-21T14:52:00.000
0
0
0
1
python,parallel-processing,queue,multiprocessing
42,402,047
1
true
0
0
So, we went ahead with creating n processes instead of having a suspender. This is not the ideal approach, but for the time being it solves the issue at hand. I'd still love a better method to achieve the same.
1
0
0
The problem is still on the drawing board so far, so I can go for another, better-suited approach. The situation is like this: we create a queue of n processes, each of which executes independently of the other tasks in the queue. They do not share any resources etc. However, we noticed that sometimes (depending on queue parameters) a process k's behaviour might depend on the existence of a flag specific to the k+1 process. This flag is to be set in a DynamoDB table, and therefore the execution could fail. What I am currently searching for is a method to set some sort of waiters/suspenders in my tasks/workers so that they poll until the flag is set in the DynamoDB table, and meanwhile let the other subprocess take up the CPU. The setting of this boolean value is done a little early in the processes themselves. The dependent part of the process comes much later.
Having other subprocess in queue wait until a certain flag is set
1.2
0
0
47
42,371,406
2017-02-21T15:26:00.000
1
0
1
1
python,python-3.x,pip
62,927,458
4
false
0
0
I have Python 3.6 and 3.8 on my Ubuntu 18.04 WSL machine. Running sudo apt-get install python3-pip and then pip3 install my_package_name kept installing packages into the Python 3.6 dist directories. The only way that I could install packages for Python 3.8 was: python3.8 -m pip install my_package_name. That installed the appropriate package into the Python 3.8 dist package directory, so that when I ran my code with python3.8, the required package was available.
1
8
0
I have my deployment system running CentOS 6. It has by default python 2.6.6 installed. So, "which python" gives me /usr/bin/python (which is 2.6.6) I later installed python3.5, which is invoked as python3 ("which python3" gives me /usr/local/bin/python3) Using pip, I need to install a few packages that are specific to python3. So I did pip install using:- "sudo yum install python-pip" So "which pip" is /usr/bin/pip. Now whenever I do any "pip install", it just installs it for 2.6.6. :-( It is clear that pip installation got tied to python 2.6.6 and invoking pip later, only installs packages for 2.6.6. How can I get around this issue?
How to install pip for a specific python version
0.049958
0
0
17,626
42,372,121
2017-02-21T15:58:00.000
1
0
0
0
python-3.x,spyder,openpyxl
42,372,863
1
true
0
0
This isn't possible without you writing some of your own code. To do this you will have to write code that can evaluate conditional formatting because openpyxl is a library for the file format and not a replacement for an application like Excel.
1
2
0
I'm working on a project with Python and openpyxl. In an Excel file there are some cells with conditional formatting; these change the fill colour when the value changes. I need to extract that colour from the cell. The "normal" method, worksheet["F11"].fill.start_color.index, doesn't work: Excel doesn't treat the fill colour coming from conditional formatting as a regular fill colour, so I get '00000000' back, meaning no fill. Does anyone know how to get the fill colour? Thanks!
Python/openpyxl get conditional format
1.2
1
0
496
42,375,396
2017-02-21T18:40:00.000
-1
1
0
1
python,linux,ssh
42,375,591
3
false
0
0
If there are too many of these manual tasks, then I would look into server configuration management tools like Ansible. I have done this kind of automation using: Ansible, Python, Fabric, and Rake.
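If you'd rather stay in plain Python, a rough sketch with paramiko could look like the following; the host names, scripts and key-based authentication are all assumptions, and Fabric wraps roughly the same idea at a higher level:

```python
import paramiko

HOSTS = {
    "host1": "deploy_stuff.sh",      # hypothetical command per host
    "host2": "restart_stuff.sh",
    "host3": "check_health.sh",
}

def run_remote(host, command, user="qa"):
    """Run one command on one host over SSH and return its stdout/stderr."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)      # assumes SSH keys are already set up
    try:
        stdin, stdout, stderr = client.exec_command(command)
        return stdout.read(), stderr.read()
    finally:
        client.close()

for host, cmd in HOSTS.items():
    out, err = run_remote(host, cmd)
    print(host, "OK" if not err else "FAILED")
```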
1
4
0
So everyday, I need to login to a couple different hosts via ssh and run some maintenance commands there in order for the QA team to be able to test my features. I want to use a python script to automate such boring tasks. It would be something like: ssh host1 deploy stuff logout from host1 ssh host2 restart stuff logout from host2 ssh host3 check health on stuff logout from host3 ... It's killing my productivity, and I would like to know if there is something nice, ergonomic and easy to implement that can handle and run commands on ssh sessions programmatically and output a report for me. Of course I will do the code, I just wanted some suggestions that are not bash scripts (because those are not meant for humans to be read).
Automate ssh commands with python
-0.066568
0
1
9,599
42,380,443
2017-02-22T00:18:00.000
0
0
0
0
python,ebay-sdk
42,380,791
1
true
0
0
I've finally found the way to do this: I created a user token using the auth'n'auth method. This user token is valid for almost a year, so it can be used for my purpose. Now, there is another question around that.
1
0
0
So, I'm trying to use the ebaysdk-python module to connect to eBay and get a list of orders. After struggling a little bit with the connection, I finally found the ebay.yaml syntax. I then configured the user and password, but I'm receiving this Error 16112. So, this is my question: is there a way to connect to eBay without interactivity? I mean, without the need to give permission to get the token and such (OAuth)?
Error 16112 - How to connect to Ebay without interactivity?
1.2
0
1
69
42,382,847
2017-02-22T04:40:00.000
4
0
0
0
python,google-sheets,gspread
42,867,483
3
false
0
0
I've run into this issue repeatedly. The only consistent fix I've found is to "re-share" the file with the api user. It already lists the api user as shared (since it's in the same shared folder as everything else), but after "re-sharing" I can connect with gspread no problem. Based on this I believe it may actually be a permissions issue (Google failing to register the correct permission for the API user when accessing it through APIv3).
1
12
0
I have a google drive folder with hundreds of workbooks. I want to cycle through the list and update data. For some reason, gspread can only open certain workbooks but not others. I only recently had this problem. It's not an access issue because everything is in the same folder. I get raise SpreadsheetNotFound when I open_by_key(key). But then when I take the key and paste it into an URL, the sheet opens. Which means it's not the key. What's going on here? I'm surprised other people are not encountering this error. Have I hit my limit on the number of Google sheets I can have? I have about 2 thousand. Update: I find that if I go into the workbook and poke around, the sheet is then recognized??!! What does this mean? It doesn't recognize the sheet if the sheet isn't recently active??? Also if I try using Google App Script SpreadsheetApp.openById, the key is recognized! So the sheet is there, I just can't open it with gspread. I have use Google script to write something to the sheet first before it is recognized by gspread. I'm able to open the sheet using pygsheets but since it is new and so buggy, i can't use it. It looks like a APIv4 issue? Some sheets can't be opened with APIv3? update: here is another observation. Once you open the workbook with APIv4, you can no longer open it with V3.
gspread "SpreadsheetNotFound" on certain workbooks
0.26052
1
0
3,398
42,384,577
2017-02-22T06:52:00.000
0
1
0
0
python,selenium-webdriver,automated-tests,ui-automation,python-appium
42,408,979
3
false
0
0
1. It depends upon how the dynamic data is coming in. 2. If you want to get toast data while swiping, then it becomes hard to get accurate data.
1
0
0
I tried to test getting toast messages on an Android device with Appium 1.6.3, but it was disappointing for me: the rate of correctly getting the toast is very low. Can anyone help me?
How to improve get toast correct rate on Appium 1.6.3 with uiautomator2?
0
0
1
509
42,386,493
2017-02-22T08:41:00.000
2
0
0
0
python,tensorflow
42,386,754
1
false
0
0
Before asking a question I should probably try to run the code :) Using tf.concat(values=[A, B], concat_dim=3) seems to be working.
1
2
1
My problem is the following: I have a tensor A of shape [None, None, None, 3] ([batch_size, height, width, num_channels]) and a tensor B. At runtime it is guaranteed that A and B will have the same shape. I would like to concatenate these two tensors along num_channels axis. PS. Note that I simplified my original problem - so having a tensor of shape [None, None, None, 6] in the first place is not an option.
Tensorflow: concatenating 2 tensors of shapes containing None
0.379949
0
0
857
42,391,763
2017-02-22T12:39:00.000
0
0
1
0
python-3.x,sorting
42,392,552
1
false
0
0
You shouldn't use an algorithm at all, in the sense of implementing one. Just use the list's sort method, i.e., mylist.sort().
1
0
0
What sorting algorithm should I use in Python, in order to sort a list of elements, where each element can have a large number of digits(like between 1 and 10^5)? And the number of elements in the list is also large(say 10^5).
Best python sorting algorithm to handle large numbers
0
0
0
210
42,394,615
2017-02-22T14:43:00.000
2
0
0
0
python,sql,architecture,slack-api,epicorerp
42,415,248
2
false
0
0
Epicor ERP has a powerful extension system built in. I would create a Business Process Method (BPM) for ReceiptEntry.Update. This wouldn't check for added rows, but more specifically for where the Received flag has been changed to set. This will prevent you getting multiple notifications every time a user saves an incomplete record. In the BPM you can reference external assemblies and call the Slack APIs from there. I strongly recommend you avoid trying to do this at the database level instead of the application level. The schema can change, and it is much harder to maintain the system if someone has been adding code to the database. If it isn't done carefully, it can break the Data Model Regeneration in the Epicor Administration Console and prevent you from adding UD fields or upgrading your database.
1
1
0
I'm embarking on a software project, and I have a bit of an idea on how to attack it, but would really appreciate some general tips, advice or guidance on getting the task done. Project is as follows: My company has an ERP (Enterprise Resource Planning) system that we use to record all our business activity (i.e. create purchase orders, receive shipments, create sales orders, manage inventory etc..). All this activity is data entry into the ERP system that gets stored in a SQL Server database. I would like to push this activity to certain Slack channels via text messages. For example, when the shipping department creates a 'receipt entry' (they receiving in a package) in the ERP system, then production team would get a text saying 'item X has been received in' in their Slack channel. My current napkin sketch is this: For a given business activity, create a function that executes a SQL query to return the most recent data entry. Store this in my own external database. Routinely execute these calls (Maybe create a Windows scheduler to execute a program that runs through all the functions every 30 minutes or so??), which will compare the data from the query to the data last saved in my external database. If the same, do nothing. But if they're different: Replace the data from my external database with this new data, then use Slacks API to post a message of this new data to Slack. I'm not too certain about the mechanics of executing a program to check for new activity in the ERP system, and also uncertain about using a second database as a means of remembering what was sent to Slack previously. Any advice would be greatly appreciated. Thanks! Josh
Project Advice: push ERP/SQL transaction data to Slack
0.197375
1
0
192
42,395,369
2017-02-22T15:14:00.000
0
0
0
0
python,server,client,zeromq,pyzmq
42,517,557
1
false
0
0
It's not possible to do this in any sort of scalable way without some sort of broker or manager that will manage your communications system. The way that would work is that you have your broker on a known IP:port, and as your server and clients spin up, they connect to the broker, and the broker then tells your endpoints how to communicate to each other. There are some circumstances where such a communication pattern could make sense, generally when the server and clients are controlled by different entities, and maybe even different from the entity controlling the broker. In your case, it sounds like a dramatic amount of over-engineering. The only other way to do what you're looking for that I'm aware of is to just start brute forcing the network to find open IP:port combinations that respond the way you are looking for. Ick. I suggest you just define the IP:port you want to use, probably through some method of static configuration you can change manually as necessary, or that can act as sort of a flat-file broker that both ends of the communication can access.
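For what it's worth, a bare-bones sketch of such a broker/directory with pyzmq might look like this; the port, address and message strings are all made up:

```python
import zmq

# --- broker.py: runs on a well-known address, e.g. tcp://broker-host:5550 ---
def broker():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:5550")
    server_endpoint = None
    while True:
        msg = sock.recv_string()
        if msg.startswith("REGISTER "):       # the server announces where it listens
            server_endpoint = msg.split(" ", 1)[1]
            sock.send_string("OK")
        elif msg == "WHERE":                  # clients ask where the server is
            sock.send_string(server_endpoint or "UNKNOWN")
        else:
            sock.send_string("UNKNOWN")

# --- client side: ask the broker, then connect directly to the returned endpoint ---
def discover(broker_addr="tcp://broker-host:5550"):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect(broker_addr)
    sock.send_string("WHERE")
    return sock.recv_string()
```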
1
0
0
I am using ZMQ to facilitate communications between one server and multiple clients. Is there a method to have the clients automatically find the ZMQ server if they are on the same internal network? My goal would be to have the client be able to automatically detect the IP and Port it should connect to.
Python automatically find server with ZMQ
0
0
1
130
42,397,764
2017-02-22T16:59:00.000
0
1
1
0
python-2.7,github,travis-ci
42,401,328
1
false
0
0
You would read these important variables into your application as system variables. However, this will only work for builds that are run against master. These environment variables aren't available for builds that are run as part of pull requests.
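On the application side that just means reading an environment variable; a minimal sketch, where the variable name is whatever you chose when defining the encrypted variable in .travis.yml:

```python
import os

# MY_SERVICE_API_KEY is a placeholder name; set it in .travis.yml for CI
# and in your local environment when running outside Travis.
API_KEY = os.environ.get("MY_SERVICE_API_KEY")
if API_KEY is None:
    raise RuntimeError("MY_SERVICE_API_KEY is not set")
```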
1
0
0
I have an application I am currently working on for which I am integrating Travis CI. I am running into the problem of API keys being accessed by Travis. Given below is my current setup (without Travis): I have a config.py (and is git ignored) that has API keys for all my interfacing applications. I use ConfigParser to read this file and get the required keys. Travis asks me to look at environment variables as an option to encrypt the keys and add them to .travis.yml. How would Travis know or what needs to be done in order to make travis know that a particular key belongs to a specific interfacing application. Does there need to be changes to the code?
API Keys on .travis.yml and using it in code
0
0
0
51
42,400,159
2017-02-22T19:05:00.000
0
0
0
0
python,pandas,scipy,double,precision
42,403,483
1
false
0
0
You could just write the expression for logsf directly using logs of gamma functions from scipy.special (gammaln, loggamma). And you could send a pull request implementing the logsf for the chi-square distribution.
1
2
1
I have a set of number that can get very small, from 1e-100, to 1e-700 and lower. The precision doesn't matter as much as the exponent. I can load such numbers just fine using Pandas by simply providing Decimal as a converter for all such numeric columns. The problem is, even if I use Python's Decimal, I just can't use scipy.stats.chi2.isf and similar functions since their C code explicitly uses double. A possible workaround is that I can use log10 of the numbers. The problem here is that although there is logsf function, for chi2 it's implemented as just log(sf(...)), and will, therefore, fail when sf returns 0 where it should've returned something like 1e-600. And for isf there is no such log function at all. I wanted to know if there is any way to work with such numbers without resolving to writing all these functions myself for Decimal.
High exponent numbers with scipy.stats functions
0
0
0
105
42,401,638
2017-02-22T20:29:00.000
0
0
0
0
python,numpy,scikit-learn,libsvm,sklearn-pandas
58,574,666
1
false
0
0
I suspect your last two columns consist of only 0's. When loading a libsvm file, there is generally nothing in it indicating the number of columns. It's a sparse format of col_num:val, and the loader learns the number of columns from the highest column number observed. If you only have 0's in the last two columns, they'll get dropped in this transformation.
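If scikit-learn was used for the dump and the load, one way to keep trailing all-zero columns is to pin the column count when loading; a small sketch, with the file name assumed and the 85731 figure taken from the question:

```python
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

# When dumping (y is whatever label vector was used):
# dump_svmlight_file(vc, y, "train.libsvm")

# When loading, fix the number of features so all-zero trailing columns survive:
X, y = load_svmlight_file("train.libsvm", n_features=85731)
print(X.shape)   # (1315689, 85731)
```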
1
2
1
I have numpy sparse matrix that I dump in a libsvm format. VC was created using CountVectorizer where the size of the vocabulary is 85731 vc <1315689x85731 sparse matrix of type '<type 'numpy.int64'>' with 38911625 stored elements in Compressed Sparse Row format> But when I load libsvm file back I see that the shape is different. Two columns are gone: data[0] <1315689x85729 sparse matrix of type '<type 'numpy.float64'>' with 38911625 stored elements in Compressed Sparse Row format> I have no idea why this could be happening ? I also loaded the VC sparse matrix as dmatrix. Same issue 2 columns vanish. Hope someone with more experience could point out the issue. Thanks
Shape not the same after dumping to libsvm a numpy sparse matrix
0
0
0
251