Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
45,791,218 | 2017-08-21T07:12:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 45,792,812 | 2 | false | 0 | 0 | You can use the following regular expression to check if a string is base64 encoded or not:
^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$
In base64 encoding, the character set is [A-Z, a-z, 0-9, and + /]. If the final group is shorter than 4 characters, the string is padded with '=' characters.
^([A-Za-z0-9+/]{4})* means the string starts with 0 or more base64 groups.
([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$ means the string ends in one of three forms: [A-Za-z0-9+/]{4}, [A-Za-z0-9+/]{3}= or [A-Za-z0-9+/]{2}==. | 2 | 0 | 0 | What max length can I expect to a base64encoded string of length 10 in python?
I need to specify that in my database. | What will be the max length of base64encode string of length 10 in python? | 0 | 0 | 0 | 1,041 |
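A hedged, runnable sketch of the regex check above (and, on the asker's actual question: base64 of 10 input bytes is ceil(10/3)*4 = 16 characters):

```python
import re

# Regex from the answer above: zero or more 4-character base64 groups,
# then one correctly padded final group.
BASE64_RE = re.compile(
    r"^([A-Za-z0-9+/]{4})*"
    r"([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$"
)

def looks_like_base64(s):
    return BASE64_RE.match(s) is not None

print(looks_like_base64("aGVsbG8="))     # True  ("hello")
print(looks_like_base64("not base64!"))  # False
```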
45,793,618 | 2017-08-21T09:27:00.000 | 2 | 0 | 0 | 0 | python,django,python-social-auth | 45,806,230 | 1 | false | 1 | 0 | The only way to override the URLs is to define your own ones pointing to the views and link it into your main urls.py file.
If what you are after is to make /login automatically handle the Google auth backend, then you need to define a custom view for it that can call python-social-auth views to fire up the process. | 1 | 1 | 0 | I am using python-social-auth for Google authentication in my Django application. Can I override the python-social-auth URLs? By default, it's http://mydomain/login/google-oauth2/ and I need to change the URL as part of my view (get request); which has the end-point as http://mydomain/login/. | override python-social-auth built-in urls? | 0.379949 | 0 | 1 | 339
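A minimal, hedged sketch of that idea for an older Django URLconf — the /login/google-oauth2/ path is assumed to be the default python-social-auth URL mentioned in the question, not something this answer confirms:

```python
# urls.py -- hypothetical sketch: make /login/ start the Google OAuth2 flow
# by redirecting to the URL python-social-auth already provides.
from django.conf.urls import url
from django.shortcuts import redirect

def login_view(request):
    return redirect('/login/google-oauth2/')  # existing social-auth entry point

urlpatterns = [
    url(r'^login/$', login_view, name='login'),
    # the python-social-auth include from the project's existing setup stays as-is
]
```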
45,796,411 | 2017-08-21T11:50:00.000 | -1 | 0 | 0 | 0 | javascript,python,html,selenium,web-scraping | 45,796,569 | 3 | false | 1 | 0 | It's not allowed to download a website without permission. If you knew that, you would also know there is hidden code on the hosting server that you, as a visitor, have no access to. | 1 | 3 | 0 | I have to download source code of a website like www.humkinar.pk in simple HTML form. Content on site is dynamically generated. I have tried driver.page_source function of selenium but it does not download page completely such as image and javascript files are left. How can I download complete page. Is there any better and easy solution in python available? | Download entire webpage (html, image, JS) by Selenium Python | -0.066568 | 0 | 1 | 1,442
45,799,786 | 2017-08-21T14:34:00.000 | 1 | 0 | 1 | 0 | python,logging | 45,800,031 | 1 | false | 0 | 0 | Importing os.devnull and setting it as a default file handler for parent logger maybe?
I usually flush all logs to devnull except those that were explicitly set up (dunno if it's a good or bad practice). | 1 | 1 | 0 | Temporary disable logging completely
I am trying to write a new log-handler in Python which will post a json-representation of the log-message to a HTTP endpoint, and I am using the request library for the posting. The problem is that both request and urllib3 (used by request) logs, and their loggers has propagate=True, meaning that the logs they log will be propagated to any parent loggers. If the user of my log-handler creates a logger with no name given, it becomes the root logger, so it will receive this messages, causing an infinite loop of logging. I am a bit lost on how to fix this, and I have two suggestions which both seem brittle.
1) Get the "reguest" and "urllib3" loggers, set their propagate values to false, post the log message before setting the propagate values back to their old values.
2) Check if the incoming record has a name which contains ".request" or ".urllib3", and if it does then ignore the record.
Both of these will break badly if the request library either replaces urllib3 with something else or changes the name of its logger. It also seems likely that method 1 will be problematic in a multi-threaded or multi-process case.
What I would want is some way of disabling all logging for the current thread from some point and then enable it again after we have posted the message, but I don't know any way to do this.
Any suggestions? | Temporary disable logging completely in Python | 0.197375 | 0 | 1 | 197 |
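Not from the answer above — a hedged sketch of one way to get the "ignore records generated by my own HTTP call" behaviour the asker describes, using a thread-local guard (the actual POST is stubbed out as a placeholder):

```python
import logging
import threading

_guard = threading.local()

class HTTPLogHandler(logging.Handler):
    """Hypothetical handler: drop records emitted while this thread is
    already posting a record, which breaks the requests/urllib3 loop."""

    def emit(self, record):
        if getattr(_guard, "posting", False):
            return  # record produced by our own HTTP request -- ignore it
        _guard.posting = True
        try:
            self.post(record)
        finally:
            _guard.posting = False

    def post(self, record):
        # placeholder for requests.post(endpoint, json=...)
        pass
```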
45,800,944 | 2017-08-21T15:34:00.000 | 4 | 0 | 0 | 0 | python,pywinauto | 45,804,604 | 1 | true | 0 | 0 | It's possible through .element_info member of wrapper. But there are inconsistencies for some properties. Say you can access .element_info.name instead of .element_info.title. But .element_info.control_type is consistent as well as .element_info.class_name.
Will think about aligning them in next release. Thanks for the feedback! | 1 | 3 | 0 | Is there a way to get window's properties like: title and control_type using pywinauto?
Because it seems that you can search windows by them, but there's no window attribute that points to these properties. | Getting window's properties using pywinauto | 1.2 | 0 | 0 | 4,789 |
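A hedged sketch of reading those properties via element_info, assuming the UIA backend and a Notepad window as a stand-in target:

```python
from pywinauto import Application

# Window title below is a placeholder for whatever window you target.
app = Application(backend="uia").connect(title="Untitled - Notepad")
wrapper = app.top_window().wrapper_object()

info = wrapper.element_info
print(info.name)          # the window's title text (exposed as "name")
print(info.control_type)  # e.g. "Window"
print(info.class_name)    # e.g. "Notepad"
```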
45,801,061 | 2017-08-10T07:34:00.000 | 0 | 0 | 1 | 0 | computer-vision,python | 45,801,062 | 1 | true | 0 | 0 | Yes, you can rewrite your DSP code in Swift, and the code will possibly run lots faster if you do convert from Python. For certain common DSP operations, Swift code can call vDSP functions, which are included inside the iOS Accelerate framework.
Sending an entire image to a server for processing is also possible, but that incurs server infrastructure costs, bandwidth costs, network latency, and privacy concerns, in addition to any processing. | 1 | 0 | 0 | I've made two simple photo effects with python while i am learning image processing. I want these effects to get inside IOS app, and then to try to use them on an image taken from a camera. But i am not sure how to integrate a Python code/script inside IOS. I don't want to write Python code inside IOS, i want to run it from other place if that possible.
Or, should I re-write the code with Swift? But I don't want my app to get messy, so I was looking to call Python scripts via a REST API. | Does Swift is used for image processing, or i need to integrate/run python code in IOS? | 1.2 | 0 | 0 | 70
45,802,690 | 2017-08-21T17:28:00.000 | 2 | 0 | 1 | 0 | python | 45,802,768 | 3 | false | 0 | 0 | find . -name "*.py" -exec ipython nbconvert --to=python {} \; should work on Linux. | 1 | 2 | 0 | I was wondering if their is anyway to go through and make copies of each .ipynb file in a folder and then change that file to .py. I want to keep the .ipynb files but I would like to have a .py file as well. I know how to do it manually but would like a way to do it automatically for each file in a specified directory. | Converting all files in folder to .py files | 0.132549 | 0 | 0 | 1,236 |
45,803,704 | 2017-08-21T18:40:00.000 | 0 | 0 | 1 | 0 | python-2.7,tkinter,yum,rhel7 | 45,806,096 | 1 | false | 0 | 1 | The tkinter package is in the rhel-7-server-optional-rpms repo, though I don't see pyautogui available in RHEL.
Your mileage may vary on whether tkinter is at the right version needed. | 1 | 0 | 0 | I've disabled the subscription manager and registered a few repos, epel and remi, but am unable to install tkinter. Keep getting the error no package available. The only package available is for python 3.4. Was wondering whether anyone else had run into this issue. Not sure how to resolve it there's not a lot of documentation on the RHEL website.
I've also tried installing tcl/tk and yum can't seem to find these packages either. The one package with tcl in it it fails because of dependency problems. No luck with yum groupinstall -y "development tools" either. I'm mostly just trying to install pyautogui which requires tkinter be installed already. | RHEL 7 installing tkinter with python 2.7 | 0 | 0 | 0 | 1,205 |
45,803,795 | 2017-08-21T18:47:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,cassandra,cassandra-3.0,datastax-python-driver | 45,809,411 | 1 | false | 0 | 0 | Please check if your nodes are really listening by opening up a separate connection from, say, a cqlsh terminal; as you say it is running locally, it is probably a single node. If that connects, you might want to see how many file handles are open, maybe it is running out of those. We had a similar problem a couple of years back that was attributed to available file handles. | 1 | 1 | 1 | I am consistently getting this error under normal conditions. I am using the Python Cassandra driver (v3.11) to connect locally with RPC enabled. The issue presents itself after a period of time. My assumption was that it was related to max number of connections or queries. Any pointers on where to begin troubleshooting would be greatly appreciated. | DataStax Cassandra cassandra.cluster.NoHostAvailable | 0 | 0 | 0 | 203
45,805,185 | 2017-08-21T20:39:00.000 | 4 | 0 | 0 | 0 | python,virtualenv,travis-ci | 45,819,532 | 2 | false | 0 | 0 | Choose a different language which sidesteps Python setup. For example, language: generic.
You would be responsible for doing most other stuff, but that might work for you. | 1 | 0 | 0 | I have a Python project with which I am using zc.buildout and pyenv. Travis seems to automatically activate a virtualenv for my project and I can't find any documentation to disable it.
Is there a good way to disable virtualenv in Travis CI? | How to disable virtualenv at Travis CI? | 0.379949 | 0 | 0 | 271 |
45,806,624 | 2017-08-21T22:59:00.000 | 1 | 0 | 1 | 0 | python,algorithm,text | 45,815,575 | 1 | true | 0 | 0 | Basically, a simple brute-force approach can solve all of your problems. But you should consider other algorithms depending on your requirements (timing, memory, ...): Boyer–Moore, the Rabin–Karp string search algorithm, or the Knuth–Morris–Pratt algorithm. | 1 | 0 | 0 | I have a collection of 40-50 text files that contain markdown. Some of them contain duplicate words, sentences, and paragraphs. I'm looking for a script/algorithm to scan the files and help me identify matches (or near matches). Where can I find such a thing? Searching for this type of thing online yielded results for other types of problems, but not this one. Would appreciate any clues to help me narrow my search... | Use Python to find and remove duplicate text in a collection of files | 1.2 | 0 | 0 | 247
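Not from the answer — a hedged brute-force sketch that flags exact duplicate paragraphs across a folder of markdown files by hashing them:

```python
import glob
import hashlib
from collections import defaultdict

seen = defaultdict(list)  # paragraph hash -> list of files containing it

for path in glob.glob("*.md"):
    with open(path) as f:
        for paragraph in f.read().split("\n\n"):
            text = paragraph.strip().lower()
            if text:
                key = hashlib.md5(text.encode("utf-8")).hexdigest()
                seen[key].append(path)

for key, files in seen.items():
    if len(files) > 1:
        print("Duplicate paragraph found in:", sorted(set(files)))
```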
45,806,967 | 2017-08-21T23:45:00.000 | 0 | 1 | 0 | 1 | python,jenkins | 45,807,814 | 2 | false | 0 | 0 | You can use which python to find which python Jenkins use.
You can use ABSPATH/OF/python to run your pytest | 1 | 0 | 0 | I am running pytest on a Jenkins machine; although I am not sure which Python it is actually running.
The machine is running OSX; and I did install various libraries (like numpy and others), on top of another Python install via Brew, so I keep things separated.
When I run the commands from console; I specify python2.6 -m pytest mytest.py, which works, but when I run the same via shell in Jenkins, it fail, because it can't find the right libraries (which are the extra libraries I did install, after installing Python via Brew).
Is there a way to know what is Jenkins using, so I can force it to run the correct python binary, which has access to my extra libraries? | how to find out which Python is called when I run pytest via Jenkins | 0 | 0 | 0 | 171 |
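A quick, hedged way to see exactly which interpreter the Jenkins job is running — drop this into the test run or a one-off build step:

```python
import sys

print(sys.executable)  # full path of the running interpreter
print(sys.version)     # its version string
print(sys.path)        # where it will look for your extra libraries
```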
45,807,417 | 2017-08-22T00:52:00.000 | 1 | 0 | 0 | 0 | python,web-crawler,delay,responsibility | 45,808,077 | 1 | true | 0 | 0 | I'd check their robots.txt. If it lists a crawl-delay, use it! If not, try something reasonable (this depends on the size of the page). If it's a large page, try 2/second. If it's a simple .txt file, 10/sec should be fine.
If all else fails, contact the site owner to see what they're capable of handling nicely.
(I'm assuming this is an amateur server with minimal bandwidth) | 1 | 1 | 0 | What is a responsible / ethical time delay to put in a web crawler that only crawls one root page?
I'm using time.sleep(#) between the following calls
requests.get(url)
I'm looking for a rough idea on what timescales are:
1. Way too conservative
2. Standard
3. Going to cause problems / get you noticed
I want to touch every page (at least 20,000, probably a lot more) meeting certain criteria. Is this feasible within a reasonable timeframe?
EDIT
This question is less about avoiding being blocked (though any relevant info. would be appreciated) and rather what time delays do not cause issues to the host website / servers.
I've tested with 10 second time delays and around 50 pages. I just don't have a clue if I'm being over cautious. | Responsible time delays - web crawling | 1.2 | 0 | 1 | 968 |
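Not from the answers — a hedged sketch of a polite fetch loop with a fixed delay; the 1-second delay and URL pattern are placeholders, not recommendations:

```python
import time
import requests

urls = ["http://example.com/page/%d" % i for i in range(1, 6)]  # placeholder list

for url in urls:
    response = requests.get(url)
    print(url, response.status_code)
    time.sleep(1)  # crawl delay between consecutive requests
```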
45,809,216 | 2017-08-22T05:01:00.000 | 3 | 0 | 1 | 0 | python,json,csv,dictionary | 45,809,281 | 2 | true | 0 | 0 | Like all things performance-related, don't bother optimizing until it becomes a problem. What you're doing is the normal, simple approach, so keep doing it until you hit real bottlenecks. A "huge response" is a relative thing. To some a "huge" response might be several kilobytes, while others might consider several megabytes, or hundreds of megabytes to be huge.
If you ever do hit a bottleneck, the first thing you should do is profile your code to see where the performance problems are actually occurring and try to optimize only those parts. Don't guess; For all you know, the CSV writer could turn out to be the poor performer.
Remember, those JSON libraries have been around a long time, have strong test coverage and have been battle tested in the field by many developers. Any custom solution you try to create is going to have none of that. | 2 | 1 | 0 | I am making an API call that gets a JSON response. However, as the response is huge and I don't need all the information received, I am parsing only the required key:values to a dictionary which I am using to write to a CSV file. Is it a good practice to do? Should I parse the JSON data directly to create the CSV file? | Is it a good practice to parse a JSON response to a python dictionary? | 1.2 | 0 | 0 | 277 |
45,809,216 | 2017-08-22T05:01:00.000 | 0 | 0 | 1 | 0 | python,json,csv,dictionary | 45,809,420 | 2 | false | 0 | 0 | If u want to write only particular key:value pairs into csv file, it is better to convert json into python dictionary with selected key:value pairs and write that into csv file. | 2 | 1 | 0 | I am making an API call that gets a JSON response. However, as the response is huge and I don't need all the information received, I am parsing only the required key:values to a dictionary which I am using to write to a CSV file. Is it a good practice to do? Should I parse the JSON data directly to create the CSV file? | Is it a good practice to parse a JSON response to a python dictionary? | 0 | 0 | 0 | 277 |
45,811,440 | 2017-08-22T07:28:00.000 | 0 | 0 | 0 | 0 | python,sqlite | 45,811,504 | 1 | false | 0 | 0 | Asking a password when opening a file doesn't make much sense, it will take another program to do that, watching the file and intercepting the request at os level..
What you need to do is protect the file using ACL, setting the proper access rights to only desired users&groups. | 1 | 0 | 0 | I just want to set a password to my file "file.db" (SQLite3 database), if someone trying to open this DB it has to ask password for authentication.
is there any way to do this Using python.
Thanks in Advance. | protecting DB using Python | 0 | 1 | 0 | 52 |
45,812,813 | 2017-08-22T08:39:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 45,829,549 | 1 | false | 0 | 0 | It turns out it was because I didn't have an interpreter set for my project (and not that it was placed in a site-packages directory). I assumed that it did due to the fact that PyCharm didn't have it's usual "no interpreter configured for this project" alert. Finally figured it out when I went poking in the settings. | 1 | 0 | 0 | I recently created a Python package which I use in most of my projects so I moved it to my user-site directory (python -m site --user-site). Everything works perfectly now except for the fact that PyCharm has disabled code inspection for it.
Is there any way for me to enable code inspection without moving the project back to a different directory? | How to enable PyCharm code inspection for packages | 0 | 0 | 0 | 55 |
45,813,527 | 2017-08-22T09:16:00.000 | 0 | 0 | 1 | 0 | python,matplotlib,anaconda,windows-subsystem-for-linux | 45,832,556 | 1 | true | 0 | 0 | It looks like when anaconda or matplotlib was installed it's created the matplotlibrc file in C:\Users\user\AppData\Local\lxss\home\puter\anaconda3\lib\python3.6\site-packages\matplotlib\mpl-data using the windows environment. This has caused the file not to be recognised in WSL.
To fix this create another matplotlibrc file in bash or whatever shell you're using. In the directory listed above copy the contents of the previously created matplotlibrc file into your new matplotlibrc file. Make sure you don't create this file in the windows environment otherwise it won't be recognised. | 1 | 0 | 1 | I'm currently using anaconda and python 3.6 on windows bash. Every time i want to use matplotlib I have to paste a copy of the matplotlibrc file into my working directory otherwise my code won't run or plot and I get the warning - /home/computer/anaconda3/lib/python3.6/site-packages/matplotlib/init.py:1022: UserWarning: could not find rc file;returning defaults
my matplotlibrc file is located at C:\Users\user\AppData\Local\lxss\home\puter\anaconda3\lib\python3.6\site-packages\matplotlib\mpl-data
I thought to fix this I could edit my .condarc file and set it to look for matplotlibrc in the correct directory. Could anyone tell me how to do this? | how to change matplotlibrc default directory | 1.2 | 0 | 0 | 950 |
45,817,518 | 2017-08-22T12:21:00.000 | 1 | 0 | 1 | 0 | python,file,exe | 45,817,605 | 1 | false | 0 | 0 | you can use the linux tool strings | 1 | 0 | 0 | Is it possible to read exe file and extract all strings used in this exe? I need to check what kind of strings are used in some exe file, but I'm not sure that is a simple as I think - just read the file and get strings. | How do I extract strings used in exe file in Python? | 0.197375 | 0 | 0 | 626 |
45,817,703 | 2017-08-22T12:30:00.000 | 0 | 0 | 1 | 0 | python-3.x | 45,877,103 | 1 | false | 0 | 1 | This is what I have found:
There is a designer from Qt (Qt Designer) to build a .ui file. There is a tool for translating the .ui file into Python. Then you can edit the logic with any Python tool. You only need PyQt; the current version is PyQt5. | 1 | 0 | 0 | I want to do some applications with python, but I haven't found any way of getting a tool-box of buttons, check box, etc.
Can someone please explain how I can do that with:
1. Pycharm.
2. If it is a problem with PyCharm, Visual Studio Community is also okay.
Thanks,
Ayal | Python windows forms application | 0 | 0 | 0 | 795 |
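A hedged minimal PyQt5 example (assumes pip install PyQt5); a .ui file built in Qt Designer could be loaded instead of building the widgets in code:

```python
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QVBoxLayout, QPushButton

app = QApplication(sys.argv)

window = QWidget()
layout = QVBoxLayout(window)
button = QPushButton("Click me")
button.clicked.connect(lambda: print("clicked"))
layout.addWidget(button)

window.setWindowTitle("Hello PyQt5")
window.show()
sys.exit(app.exec_())
```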
45,820,821 | 2017-08-22T14:47:00.000 | 1 | 0 | 1 | 0 | windows,python-2.7 | 45,821,310 | 1 | true | 0 | 0 | Alright. So basically the new installer will create new associations for .py and .pyw. When you'll double click - the new version will be executed. Also it could change your PATH environment variable thus changing the default python.exe you'll execute when you use cmd or other indirect method of calling "general python".
As long as you call specifically python.exe from the folder itself, there should be no problem. | 1 | 1 | 0 | I just tried to install a new instance of python on my C drive under a new directory, and I received a warning that my existing python instance would be replaced. Under what condition will this occur? How do I prevent this and allow them not to affect each other?
For instance, I have the following installations:
C:\Anaconda2\Python.exe
C:\Python27\Python.exe
C:\NewDir\NewDir\Python.exe
I received the error message when trying to install the last item on the list above. I know most responses are going to ask why on God's green earth I would configure my python instances this way, but for now, I'm trying to better understand the conflict. So please focus on that and not the design of the installations/ environment. I'm working with some legacy installs I need to clean up. | Multiple Instances of Python on C Drive - Windows | 1.2 | 0 | 0 | 110 |
45,822,389 | 2017-08-22T16:03:00.000 | 1 | 0 | 1 | 0 | python,linux,pip | 71,218,863 | 2 | false | 0 | 0 | Do:
pip freeze > requirements.txt
It will store all your requirements in file requirements.txt
pip wheel -r requirements.txt --wheel-dir="packages"
It will pre-package or bundle your dependencies into the directory packages
Now you can turn-off your Wi-fi and install the dependencies when ever you want from the "packages" folder.
Just Run this:
pip install --force-reinstall --ignore-installed --upgrade --no-index --no-deps packages/*
Thanks :) | 1 | 0 | 0 | I was wondering how to make a .tar.gz file of all pip packages used in a project. The project will not have access to the internet when a user sets up the application. So, I though it would be easiest to create .tar.gz file that would contain an all the necessary packages and the user would just extract and install them with a setup.py file (example) or something along those lines. Thanks | How do you make a .tar.gz file of all your pip packages used in a project? | 0.099668 | 0 | 0 | 1,561 |
45,822,897 | 2017-08-22T16:33:00.000 | 2 | 0 | 0 | 0 | selenium-chromedriver,pythonanywhere,google-chrome-headless | 45,845,020 | 2 | false | 0 | 0 | PythonAnywhere dev here -- unfortunately Chrome (headless or otherwise) doesn't work in our virtualization environment right now, so it won't work :-(
[edit] ...but now it does! See @Ralf Zosel's answer for more details. | 1 | 0 | 0 | I'd like to set up Chrome in headless mode and the ChromeDriver for Selenium testing on my PythonAnywhere instance. I can't find any instructions on how to sort this out. Does anyone have any advice/pointers to docs please? | How to Set Up Chrome Headless on PythonAnywhere? | 0.197375 | 0 | 1 | 365 |
45,824,737 | 2017-08-22T18:39:00.000 | 0 | 0 | 0 | 1 | python-2.7,tensorflow | 45,825,713 | 2 | false | 0 | 0 | Use Python EasyInstall, is super easy:
sudo easy_install pip | 1 | 0 | 1 | I installed Tensorflow on my macOS Sierra using pip install tensorflow.
Im getting the following error:
OSError: [Errno 1] Operation not
permitted:'/var/folders/zn/l9gmn4613677f6mlrh6prtb00000gn/T/pip-xv3AU6-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'
Is there anyway to resolve this? | Error in installing Tensorflow on Mac | 0 | 0 | 0 | 109 |
45,825,401 | 2017-08-22T19:20:00.000 | 0 | 0 | 0 | 0 | python,excel,openpyxl | 45,868,428 | 1 | false | 0 | 0 | Asked the dev about it -
There is a dispBlanksAs property of the ChartContainer but this currently isn't accessible to client code.
I looked through the source some more using that answer to guide me. The option is definitely in there, but you'd have to modify source and build locally to get at it.
So no, it's not accessible at this time. | 1 | 0 | 1 | When selecting a data source for a graph in Excel, you can specify how the graph should treat empty cells in your data set (treat as zero, connect with next data point, leave gap).
The option to set this behavior is available in xlsxwriter with chart.show_blanks_as(), but I can't find it in openpyxl. If anyone knows where to find it or can confirm that it's not present, I'd appreciate it. | How to replicate the "Show empty cells as" functionality of Excel graphs | 0 | 1 | 0 | 101 |
45,828,456 | 2017-08-22T23:36:00.000 | 1 | 0 | 1 | 0 | python,ghostscript | 45,833,436 | 2 | true | 1 | 0 | You are installing on Windows, the Windows binary differs in name from the Linux binaries and indeed differs depending whether you installed the 64 or 32-bit version.
On Linux (and MacOS) the Ghostscript binary is called 'gs'; on Windows it's 'gswin32', 'gswin64', 'gswin32c' or 'gswin64c', depending on whether you want the 32 or 64 bit version, and the command line or windowed executable.
My guess is that your script is looking for simply 'gs' and is probably expecting the path to be in the $PATH environment variable, its not clear to me what its expecting.
You could probably 'fix' this by making sure the installation path is in the $PATH environment variable and copying the executable to 'gs.exe' in that directory.
Other than that you'll need someone who can tell you what the script is looking for. Quite possibly you could just grep it. | 1 | 1 | 0 | I'm trying to generate a pdf417 barcode in python using treepoem but pycharm keeps giving me the following error:
Traceback (most recent call last):
File "C:/Users/./Documents/barcodes.py", line 175, in
image = generate_barcode(barcode_type="pdf417",data=barcode, options=dict(eclevel=5, rows=27, columns=12))
File "C:\Users.\AppData\Local\Programs\Python\Python36-32\lib\site-packages\treepoem__init__.py", line 141, in generate_barcode
bbox_lines = _get_bbox(code)
File "C:\Users.\AppData\Local\Programs\Python\Python36-32\lib\site-packages\treepoem__init__.py", line 81, in _get_bbox
ghostscript = _get_ghostscript_binary()
File "C:\Users.\AppData\Local\Programs\Python\Python36-32\lib\site-packages\treepoem__init__.py", line 108, in _get_ghostscript_binary
'Cannot determine path to ghostscript, is it installed?'
treepoem.TreepoemError: Cannot determine path to ghostscript, is it installed?
I've tried to install ghostcript, using both the .exe I found online and using pip install ghostscript (successfully completed the first time, and now tells me the requirement is satisfied), yet I still keep getting this error. Any ideas on how to fix it? | Treepoem barcode generator unable to find ghostscript | 1.2 | 0 | 0 | 2,240 |
45,830,522 | 2017-08-23T04:11:00.000 | 1 | 0 | 0 | 0 | javascript,xml,python-2.7,openerp,odoo-8 | 45,907,926 | 1 | false | 1 | 0 | I am not aware of any module that implements this.
This functionality already exists on the Advanced Search, you add conditions and then click Apply you can take a look at the corresponding widget and copy functionality.
What you need to do is modify the javascript so that upon the clicking of a filter it should be added on the oe_searchview view but the search_read method will not be invoked. You need to start the modifications from web.search.FilterGroup widget and specifically from the search_change method which is invoked every time you click on the filters. | 1 | 5 | 0 | Currently I am using odoov8, my problem is that I have created many filters from xml code as per my requirement and all are working fine ,but I can select only one filter at a time .
so for example I want to apply any 3 Filters in tree view , then I need to select first one , then system loads the data , then I select 2nd and then 3rd filter , so system is loading after applying each filter .
I want to achieve that if I can select all my filters at once and then I can apply search , so system loads after I apply for search , no matter how much time it taking but I should not require to search single filter .
so is there any custom module or way from which I can achieve this .
Thanks in advance . | Apply Multiple filter in odoo at a time | 0.197375 | 0 | 0 | 712 |
45,830,892 | 2017-08-23T04:55:00.000 | 3 | 0 | 1 | 0 | python | 45,830,954 | 2 | false | 0 | 0 | Just write a comment in the __init__.py. that will do. | 1 | 5 | 0 | I needed an empty __init__.py file in order to call a class from my main in a project I was working on. If I understand correctly, this is the constructor?
Since github doesn't allow files that are empty to be added to my repository, I was wondering if it actually wasn't necessary for my project or if I needed a work around? | Should I add __init__.py to github? | 0.291313 | 0 | 0 | 5,604 |
45,833,114 | 2017-08-23T07:20:00.000 | 0 | 0 | 1 | 0 | python | 45,833,271 | 1 | false | 0 | 0 | You're thinking of this in the wrong way. If they're coming to your website, then they're using your web app. It's only that that needs to run under the virtual environment, which you would configure in its own startup script (eg the wsgi script). | 1 | 0 | 0 | I have shared web hosting space with python 2.6.6 pre configured, now I have installed python 3, I have created the virtual env for the same & activated it.
My question is - How can I keep the python3 virtual environment activated all the time, even when am not using the console/putty.
The problem is I have imported couple of libraries to python3 & want to use it, but if my console/putty is closed my header line in .py files has to be pointed back to #!/usr/bin/python >>but this points to python2.6.6.
Whereas I want the python3 should always work. All the users coming to my website, their requests needed to be processed by python3 instead of python2.6.6.
Really searched a lot but could not get this specific information.
Thanks... | How to make Python 3 virtual env activated for all the time, even when not using console/putty | 0 | 0 | 0 | 118 |
45,834,276 | 2017-08-23T08:16:00.000 | 2 | 0 | 0 | 0 | python,numpy,image-preprocessing | 45,834,527 | 4 | false | 0 | 0 | Key here are the assignment operators. They actually performs some operations on the original variable.
a += c is actually equal to a=a+c.
So indeed a (in your case x) has to be defined beforehand.
Each method takes an array/iterable (x) as input and outputs a value (or array if a multidimensional array was input), which is thus applied in your assignment operations.
The axis parameter means that you apply the mean or std operation over the rows. Hence, you take values for each row in a given column and perform the mean or std.
Axis=1 would take values of each column for a given row.
What you do with both operations is that first you remove the mean so that your column mean is now centered around 0. Then, when you divide by std, you happen to reduce the spread of the data around this zero, and now it should roughly be in a [-1, +1] interval around 0.
So now, each of your column values is centered around zero and standardized.
There are other scaling techniques, such as removing the minimal or maximal value and dividing by the range of values. | 1 | 16 | 1 | I saw in tutorial (there were no further explanation) that we can process data to zero mean with x -= np.mean(x, axis=0) and normalize data with x /= np.std(x, axis=0). Can anyone elaborate on these two pieces on code, only thing I got from documentations is that np.mean calculates arithmetic mean calculates mean along specific axis and np.std does so for standard deviation. | Numpy:zero mean data and standardization | 0.099668 | 0 | 0 | 42,882 |
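The two lines from the tutorial in a small runnable sketch:

```python
import numpy as np

x = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

x -= np.mean(x, axis=0)  # each column now has mean 0
x /= np.std(x, axis=0)   # each column now has standard deviation 1

print(x.mean(axis=0))  # ~[0. 0.]
print(x.std(axis=0))   # [1. 1.]
```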
45,838,549 | 2017-08-23T11:32:00.000 | 0 | 0 | 0 | 0 | python,sockets | 51,373,042 | 1 | true | 0 | 0 | It was because I was generating a new socket each time, rather than just re-using one socket. | 1 | 0 | 0 | I'm writing a basic socket program in Python3, which consists of three different programs - sender.py, channel.py, and receiver.py. The sender should send a packet through the channel to the receiver, then receiver sends an acknowledgement packet back.
It works for sending one packet - it goes through the channel to the receiver, and the receiver sends an acknowledgement packet through the channel to the sender, which gets it successfully. But when the sender tries to send a second packet, it attempts to send it but gets no response, so it sends it again. When it does, it gets BrokenPipeError: [Errno 32] Broken pipe. The channel gives no indication that it receives the second packet, and just sits there waiting. What does this mean and how can it be avoided?
I never call close() on any of the sockets. | How do I avoid a BrokenPipeError while using the sockets module in Python? | 1.2 | 0 | 1 | 56 |
45,845,425 | 2017-08-23T16:55:00.000 | 1 | 0 | 0 | 0 | python,pandas,dataframe,sampling,balance | 45,845,595 | 2 | false | 0 | 0 | My best guess: 'protect' one random row from each id (create separate dataframe with those rows), then delete from original dataframe until satisfied (including the fact that the classes in the 'protected' dataframe will line up flush with what remains) and concatenate the two dataframes? | 1 | 1 | 1 | I am looking for an elegant way to sample a dataset in a specific way. I found a few solutions, but I was wondering if any of you know a better way.
Here is the task I am looking at:
I want to balance my dataset, so that I have the same amount of instances for class 0 as for class 1, so in the example below we have 5 instances of class 1 and 11 instances of class 0:
id | class
------ | ------
1 | 1
1 | 0
1 | 0
1 | 0
1 | 0
2 | 1
2 | 1
2 | 0
2 | 0
2 | 0
3 | 1
3 | 1
3 | 0
3 | 0
3 | 0
3 | 0
Sofar I have just deleted randomly 6 instances of class 0, but I would like to prevent that all instances of one id could get deleted. I tried doing a stratified "split", with sklearn, but it does not work, because not every id contains more than 1 item. The desired output should look similar to this:
id | class
------ | ------
1 | 1
1 | 0
2 | 1
2 | 1
2 | 0
2 | 0
3 | 1
3 | 1
3 | 0
3 | 0
Any good ideas? | Python Pandas Dataframe sampling | 0.099668 | 0 | 0 | 749 |
45,846,441 | 2017-08-23T17:54:00.000 | 1 | 0 | 1 | 0 | python,heap | 45,846,492 | 1 | false | 0 | 0 | Both operations can violate the heapq invariant, so yes, you should heapify after either operation. | 1 | 0 | 0 | Let's say I've built a heap by using heappush to push 10 numbers onto the heap.
Now I want to remove certain elements, the nth element or a particular number.
Since the heap is just a list, I can use pop(i) or remove to do that.
After using pop() and remove(), is the list still a heap or do I have to heapify the list again? | Using "remove" or "pop" to delete elements from heapq | 0.197375 | 0 | 0 | 541 |
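A hedged sketch of the point in the answer below: arbitrary removal can break the heap invariant, so re-heapify afterwards (heappop on its own keeps the invariant):

```python
import heapq

heap = []
for n in [5, 1, 9, 3, 7]:
    heapq.heappush(heap, n)

heap.remove(9)       # arbitrary deletion -- the heap invariant may now be broken
heapq.heapify(heap)  # restore the heap property in O(n)

print(heapq.heappop(heap))  # 1
```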
45,848,956 | 2017-08-23T20:42:00.000 | 2 | 0 | 1 | 0 | python,sql,json,postgresql,pickle | 45,850,429 | 2 | false | 0 | 0 | What you want to do is store a one-to-many relationship between a row in your table and the members of the set.
None of your solutions allow the members of the set to be queried by SQL. You can't do something like select * from mytable where 'first item' in myset. Instead you have to retrieve the text/blob and use another programming language to decode or parse it. That means if you want to do a query on the elements of the set you have to do a full table scan every time.
I would be very reluctant to let you do something like that in one of my databases.
I think you should break out your set into a separate table. By which I mean (since that is clearly not as obvious as I thought), one row per set element, indexed over primary key of the table you are referring from or, if you want to enforce no duplicates at the cost of a little extra space, primary key of the table you are referring from + set element value.
Since your set elements appear to be of heterogeneous types I see no harm in storing them as strings, as long as you normalize the numbers somehow. | 1 | 0 | 0 | I would like to store a "set" in a database (specifically PostgreSQL) efficiently, but I'm not sure how to do that efficiently.
There are a few options that pop to mind:
store as a list ({'first item', 2, 3.14}) in a text or binary column. This has the downside of requiring parsing when inserting into the database and pulling out. For sets of text strings only, this seems to work pretty well, and the parsing is minimal. For anything more complicated, parsing becomes difficult.
store as a pickle in a binary column. This seems like it should be quick, and it is complete (anything picklable works), but isn't portable across languages.
store as json (either as a binary object or a text stream). Larger problems than just plain text, but better defined parsing.
Are there any other options? Does anyone have any experience with these? | How to store a "set" (the python type) in a database efficiently? | 0.197375 | 1 | 0 | 76 |
45,852,491 | 2017-08-24T03:33:00.000 | 0 | 0 | 0 | 1 | java,python,user-interface,javafx,console | 45,989,477 | 1 | true | 1 | 0 | My solution was to still call the Python code from the Java Processbuilder, but use the -u option like python -u scriptname.py to specify unbuffered Python output. | 1 | 0 | 0 | I built a GUI in JavaFX with FXML for running a bunch of different Python scripts. The Python scripts continuously collect data from a device and print it to the console as it's collected in a loop at anywhere from around 10 to 70 Hz depending on which script was being run, and they don't stop on their own.
I want the end-user to be able to click a button on my GUI which launches the scripts and lets them see the output. Currently, the best I have done was using Runtime.exec() with the command "cmd /c start cmd /k python some_script.py" which opens the windows command prompt, runs python some_script.py in it, and keeps the command prompt open so that you can see the output. The problem with this is that it only works on Windows (my OS) but I need to have universal OS support and that it relies on Java starting an external program which I hear is not very elegant.
I then tried to remedy this by executing the python some_script.py command in Java, capturing the process output with BufferedReader, creating a new JavaFX scene with just a TextArea in an AnchorPane to be a psuedo-Java-console and then calling .setText() on that TextArea to put the script output in it.
This kinda worked, but I ran into many problems in that the writing to the JavaFX console would jump in big chunks of several dozens of lines instead of writing to it line by line as the Python code was making Print() calls. Also, I got a bunch of NullPointerException and ArrayIndexOutOfBoundsException somewhat randomly in that Java would write a couple of hundred lines correctly but then throw those errors and freeze the program. I'm pretty sure both of these issues were due to having so much data at such high data rates which overflowed the BufferedReader buffer and/or the TextArea.setText() cache or something similar.
What I want to know is what approach I should take at this. I cannot migrate the Python code to Java since it relies on someone else's Python library to collect its data. Should I try to keep with the pseudo-Java-console idea and see if I can make that work? Should I go back to opening a command prompt window from Java and running the Python scripts and then add support for doing the same with Terminal in Mac and Linux? Is there a better approach I haven't thought of? Is the idea of having Java code call Python code and handle its output just disgusting and a horrible idea?
Please let me know if you would like to see any code (there is quite a lot) or if I can clarify anything, and I will try my best to respond quickly. Thank you! | JavaFX show looping Python print output | 1.2 | 0 | 0 | 250 |
45,853,972 | 2017-08-24T05:58:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,python-3.x,console,pycharm | 45,854,154 | 1 | true | 0 | 0 | Here is a workaround: Enable Floating Mode in the console. The cursor will go there when the program runs.
Note: You can hide the floating console with Shift+Escape | 1 | 3 | 0 | When running your program in Pycharm, and you have an input function in the code such as input("Type True or False"). Is there a configuration setting or something to have the cursor automatically go into the console instead of having to always click in there manually? | Automatic cursor positioning in Pycharm | 1.2 | 0 | 0 | 695 |
45,854,752 | 2017-08-24T06:49:00.000 | 2 | 1 | 0 | 0 | python,python-3.x,amazon-web-services,amazon-ec2,aws-lambda | 45,856,165 | 1 | true | 1 | 0 | You could do it using boto3, but I would advise against that architecture. A better approach would be to use a load balancer (even if you only have one instance), and then use the CNAME record of the load balancer in your application (this will not change for as long as the LB exists).
An even better way, if you have access to your own domain name, would be to create a CNAME record and point it to the address of the load balancer. Then you can happily use the DNS name in your Lambda function without fear that it would ever change. | 1 | 0 | 0 | I am writing a lambda function on Amazon AWS Lambda. It accesses the URL of an EC2 instance, on which I am running a web REST API. The lambda function is triggered by Alexa and is coded in the Python language (python3.x).
Currently, I have hard coded the URL of the EC2 instance in the lambda function and successfully ran the Alexa skill.
I want the lambda function to automatically obtain the IP from the EC2 instance, which keeps changing whenever I start the instance. This would ensure that I don't have to go the code and hard code the URL each time I start the EC2 instance.
I stumbled upon a similar question on SO, but it was unanswered. However, there was a reply which indicated updating IAM roles. I have already created IAM roles for other purposes before, but I am still not used to it.
Is this possible? Will it require managing of security groups of the EC2 instance?
Do I need to set some permissions/configurations/settings? How can the lambda code achieve this?
Additionally, I pip installed the requests library on my system, and I tried uploading a '.zip' file with the structure :
REST.zip/
requests library folder
index.py
I am currently using the urllib library
When I use zip files for my code upload (I currently edit code inline), it can't even access the index.py file to run the code. | Obtain EC2 instance IP from AWS lambda function and use requests library | 1.2 | 0 | 1 | 613
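For completeness, a hedged sketch of the boto3 lookup the answer mentions (and advises against): the instance id is a placeholder and the Lambda execution role would need ec2:DescribeInstances permission.

```python
import boto3

ec2 = boto3.client("ec2")
response = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = response["Reservations"][0]["Instances"][0]
print(instance["PublicIpAddress"])  # current public IP of the running instance
```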
45,855,209 | 2017-08-24T07:13:00.000 | 1 | 0 | 0 | 0 | python,math,statistics | 45,855,698 | 2 | true | 0 | 0 | I believe you are trying to resample your data. Your current sample rate is 1/60 samples per second and you are trying to get to 1/96 samples per second (900 / (24*60*60)). The ratio between the two rates is 5/8.
If you search for "python resample" you will find other similar questions and articles involving numpy and pandas which have built in routines for it.
To do it manually you can first upsample by 5 to get to 7200 samples per day and then downsample by 8 to get down to 900 samples per day.
To upsample you can make a new list five times as long and fill in every fifth element with your existing data. Then you can do, say, linear interpolation to fill in the gaps.
Once you do that you can downsample by simply taking every eighth element.
Now I want to display it in a graph and my limitation is about 900 points on then graph. In a daily view of that graph, I'd get about 1440 points and that's too much.
I'm looking for a general way how to shrink my dataset of any size to fixed size (in my case 900) while it keeps the timestamp distribution linear.
Thanks | Reduce dataset to smaller size, keep the gist of information in the dataset | 1.2 | 0 | 0 | 1,337 |
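Not from the answer — a hedged sketch that shrinks one day of once-per-minute data to 900 evenly spaced points with linear interpolation:

```python
import numpy as np

values = np.random.rand(1440)        # one day sampled once per minute
old_t = np.arange(1440)              # minutes since midnight
new_t = np.linspace(0, 1439, 900)    # 900 evenly spaced sample times

resampled = np.interp(new_t, old_t, values)
print(resampled.shape)  # (900,)
```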
45,856,131 | 2017-08-24T08:02:00.000 | 0 | 0 | 0 | 0 | python,tkinter,py2app | 45,869,121 | 1 | false | 0 | 1 | Solved but I'm not sure it should have to work like this.
I think the problem is that Mac works with Python 2.7 (system Python) which raises an error when it gets "import tkinter".
So I installed Python 2.7 with Homebrew, redesigned my modules written in Python 3.6 ("import Tkinter" etc.), py2app-ed it, and everything went fine.
py2app works just fine with other Python files importing other packages (math, time, random etc) only importing tkinter raises this problem | py2app does not work with tkinter | 0 | 0 | 0 | 558 |
45,856,240 | 2017-08-24T08:08:00.000 | 0 | 0 | 0 | 0 | python-2.7,wxpython | 45,899,941 | 2 | false | 0 | 1 | It is much better to use wx.Dialog when you need a modal window, as it is designed from the start to behave that way. In fact, because of the inconsistencies the MakeModal method has been removed for frames, although if you really need to do it that way there is a workaround. | 1 | 0 | 0 | I encounter some annoying behavior with modal frames in WxPython.
As we all know, when a modal window shows up, it appears on foreground (on top of the main window) and the main window should become inaccessible (no response to click for example).
This is working as expected with nested WxFrames until using the Windows start (status)bar.
If user clicks on main frame on Windows bar, it appears on top of the second frame, which is totally inaccurate as user does not understand what's happening and why the window is inaccessible.
The first solution coming on my mind is to bind activation event of the first frame, and set programmatically (& systematically) the second frame to foreground. However it appears to me weird that this behavior is not already done naturally by WxPython.
Does anyone have any idea or any native / generic solution for that? | WxPython - Inaccurate modal windows behavior | 0 | 0 | 0 | 89 |
45,857,602 | 2017-08-24T09:15:00.000 | 0 | 0 | 0 | 0 | python,graph,networkx | 45,891,842 | 1 | false | 0 | 0 | If you stick to networkx, you can generate two large complete graphs with nx.complete_graph(), merge them, and then add some edges connecting randomly chosen nodes in each graph. If you want a more realistic example, build dense nx.erdos_renyi_graph()s instead of the complete graphs. | 1 | 0 | 1 | I am trying to validate a system to detect more than 2 cluster in a network graph. For this i need to create a synthetic graph with some cluster. The graph should be very large, more than 100k nodes at least. I s there any system to do this? Any known dataset with more than 2 cluster would also be suffice. | Synthetic network graph | 0 | 0 | 1 | 182 |
45,859,384 | 2017-08-24T10:37:00.000 | 0 | 0 | 0 | 1 | python,apache-kafka,kafka-producer-api,pykafka | 45,862,977 | 2 | false | 0 | 0 | Just use the send() method. You do not need to manage it by yourself.
send() is asynchronous. When called it adds the record to a buffer of
pending record sends and immediately returns. This allows the producer
to batch together individual records for efficiency.
Your task is only to configure two props for this: batch_size and linger_ms.
The producer maintains buffers of unsent records for each partition.
These buffers are of a size specified by the ‘batch_size’ config.
Making this larger can result in more batching, but requires more
memory (since we will generally have one of these buffers for each
active partition).
The two props are set as shown below:
once we get batch_size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will ‘linger’ for the specified time waiting for more records to show up. | 1 | 1 | 0 | How to produce kafka topic using message batch or buffer with pykafka. I mean one producer can produce many message in one produce process. i know the concept using message batch or buffer message but i dont know how to implement it. I hope someone can help me here | How to produce kafka topic using message batch or buffer with pykafka | 0 | 0 | 0 | 4,131 |
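Note the settings quoted above come from kafka-python rather than pykafka; a hedged kafka-python sketch (broker address, topic name, and the specific batch/linger values are placeholders):

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         batch_size=32768,  # bytes buffered per partition
                         linger_ms=50)      # wait up to 50 ms to fill a batch

for i in range(1000):
    producer.send("my-topic", ("message %d" % i).encode("utf-8"))

producer.flush()  # make sure everything still buffered is actually sent
```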
45,861,337 | 2017-08-24T12:12:00.000 | 0 | 0 | 1 | 0 | python,excel,parsing,data-mining | 45,897,224 | 1 | false | 0 | 0 | Python's standard library includes the csv module, which contains a Reader class and defined Excel-like dialect for parsing CSV data. The csv Reader classes will give you a generator of lists or dicts (if there is a header row), which you could then reroute to one of the Python-Excel integration libraries - these would all be third-party, likely open source, but not included in the standard lib. This is about as turnkey as you will find. | 1 | 0 | 0 | I am trying to build a python script to parse a huge amount of data. I will be generating data from an existing tool which will be parsed by python tool and put into an excel sheet. I haven't yet figured out how the input data has to be. Is there any particular format or patterns anyone would suggest to make parsing easier? The approach in my mind is to use regular expressions and find places in junk data to identify blocks and such.
Is there any standard or format - anything of that sort which will improve the parsing, as regular expressions can only be relied on assuming the format of input data won't change?
I believe regex is a bad idea as it's error prone. This is why I am seeking other options. Here, unlike usual scenarios, I also have the option to format or modify the raw data. So, I would like to know all possible ways to make the report generation easier. | Standards for input data to pass to a python parsing tool | 0 | 0 | 0 | 39
45,862,512 | 2017-08-24T13:08:00.000 | 0 | 0 | 1 | 0 | python,sysv-ipc | 46,037,104 | 1 | true | 0 | 0 | I'm the author of the Python sysv_ipc module.
Without seeing your code, I can't say for sure what's happening. But I have a hunch.
In your monitor code, compare the memory segment's last_pid value to the value of os.getpid(). If it's the same, then there's your answer -- last_attach_time is correctly reporting the time that your monitor program attached to the memory to see if anyone attached to it. :-)
Fuller explanation: Using a SysV IPC memory segment is a two-step process. First you create it, then you attach it. You can't do much with a memory segment that you haven't attached, so I wrote the sysv_ipc module to automatically attach the segment for you in the Python constructor. In other words, the Python constructor does both steps (create and attach) for you. That's what it means when the documentation for the constructor says "The memory is automatically attached" (but that's easy to overlook).
So if your monitor code creates a new sysv_ipc.Semaphore() object every time it runs, it will set last_attach_time when it does so.
It sounds like you're more interested in the last write time which is not a value that SysV IPC provides. One approach would be to write a timestamp as part of the data you write to the shared memory. | 1 | 0 | 0 | I am using shared memory (sysv_ipc) between two different process and I want to see the last update time of the shared memory in another code. There are three programs, one writes to the shared memory, another reads from the shared memory, and the third one I need for external error handling, so I like to know if the shared memory is not updated for the last few minutes. With this idea, I tried accessing the attribute "last_attach_time" of the shared memory. It works fine when I ran it in the terminal. That is I created the object for shared memory once in the terminal and then I tried accessing the attribute continuously and it worked completely fine. Until the shared memory was written with data, the "last_attach_time" updated the time, and when writing stopped the output became constant and this is perfectly fine. But when I included in the external error handling code which has a while loop for continuous monitoring, the attribute is not giving correct data. ie, the time is still increasing even after writing to the shared memory is stopped. Has anyone faced similar issues?
Thanks. | Shared Memory sysv_ipc Python | 1.2 | 0 | 0 | 2,188 |
45,862,917 | 2017-08-24T13:26:00.000 | 1 | 0 | 0 | 0 | python,excel,openpyxl | 45,863,816 | 1 | false | 0 | 0 | To be clear: openpyxl does support data validation as covered by the original OOXML specification. However, since then Microsoft has extended the options for data validation and it these that are not supported. You might be able to adjust the data validation so that it is supported. | 1 | 2 | 0 | I have an excel xlsx file that I want to edit using python script.
I know that openpyxl is not able to treat data-validation but I want just to edit the value of some cells containing data-validation and then save the workbook without editing those data-validation.
For now, when I try to do that, I get an error :
UserWarning: Data Validation extension is not supported and will be
removed
and then my saved file doesn't contain anymore the data-validation.
Is there a way to tell openpyxl not to remove the data-validation when saving a workbook even if it can't read it? | openpyxl : data-validation read/write without treatment | 0.197375 | 1 | 0 | 2,941 |
45,863,277 | 2017-08-24T13:43:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,graph-theory | 45,863,649 | 4 | false | 0 | 0 | I guess it depends on how you are going to represent your graph as a data structure.
The two most known graph representations as data structures are:
Adjacency matrices
Adjacency lists
Adjacency matrices
For a graph with |V| vertices, an adjacency matrix is a |V|X|V| matrix of 0s and 1s, where the entry in row i and column j is 1 if and only if the edge (i,j) is in the graph. If you want to indicate an edge weight, put it in the row i column j entry, and reserve a special value (perhaps null) to indicate an absent edge.
With an adjacency matrix, we can find out whether an edge is present in constant time, by just looking up the corresponding entry in the matrix. For example, if the adjacency matrix is named graph, then we can query whether edge (i,j) is in the graph by looking at graph[i][j].
For an undirected graph, the adjacency matrix is symmetric: the row i, column j entry is 1 if and only if the row j, column i entry is 1. For a directed graph, the adjacency matrix need not be symmetric.
Adjacency lists
Representing a graph with adjacency lists combines adjacency matrices with edge lists. For each vertex i, store an array of the vertices adjacent to it. We typically have an array of |V| adjacency lists, one adjacency list per vertex.
Vertex numbers in an adjacency list are not required to appear in any particular order, though it is often convenient to list them in increasing order.
We can get to each vertex's adjacency list in constant time, because we just have to index into an array. To find out whether an edge (i,j) is present in the graph, we go to i's adjacency list in constant time and then look for j in i's adjacency list.
In an undirected graph, vertex j is in vertex i's adjacency list if and only if i is in j's adjacency list. If the graph is weighted, then each item in each adjacency list is either a two-item array or an object, giving the vertex number and the edge weight.
Export to file
How to export the data structure to a text file? Well, that's up to you based on how you would read the text file and import it into the data structure you decided to work with.
If I were to do it, I'd probably try to dump it in the most simple way for later to know how to read and parse it back to the data structure. | 1 | 5 | 0 | My problem involves creating a directed graph, checking if it unique by comparing to a text file containing graphs and if it is unique, appending it to the file. What would be the best representation of graph to be used in that case?
I'm using Python and I'll be using brute-force to check if graphs are isomorphic, since the graphs are small and have some restrictions. | Best way to represent a graph to be stored in a text file | 0 | 0 | 0 | 5,318 |
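A hedged sketch of the "simplest thing that can be parsed back" idea — a directed graph as an adjacency-list dict with a one-node-per-line text format:

```python
graph = {0: [1, 2], 1: [2], 2: []}  # adjacency list: node -> outgoing neighbours

def dump(graph, path):
    with open(path, "w") as f:
        for node, neighbours in sorted(graph.items()):
            f.write("%d: %s\n" % (node, " ".join(str(n) for n in neighbours)))

def load(path):
    graph = {}
    with open(path) as f:
        for line in f:
            node, _, rest = line.partition(":")
            graph[int(node)] = [int(n) for n in rest.split()]
    return graph

dump(graph, "graph.txt")
print(load("graph.txt") == graph)  # True
```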
45,864,595 | 2017-08-24T14:44:00.000 | 1 | 0 | 1 | 1 | python,ubuntu,anaconda,virtualenv,conda | 55,539,151 | 5 | false | 0 | 0 | You can probably get away with copying the whole Anaconda installation to your cloud instance. | 1 | 18 | 0 | I have been using Anaconda(4.3.23) on my GuestOS ubuntu 14.04 which is installed on Vmware on HostOS windows 8.1. I have setup an environment in anaconda and have installed many libraries, some of which were very hectic to install (not straight forward pip installs). few libraries had inner dependencies and had to be build together and from their git source.
Problem
I am going to use Cloud based VM (Azure GPU instance) to use GPU. but I don't want to get into the hectic installation again as i don't want to waste money on the time it will take me to install all the packages and libraries again
Is there any way to transfer/copy my existing env (which has everything already installed) to the Cloud VM? | How to transfer Anaconda env installed on one machine to another? [Both with Ubuntu installed] | 0.039979 | 0 | 0 | 24,187 |
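A hedged note on the answer above: rather than copying the whole installation, conda itself can usually recreate an environment from a spec; running conda env export > environment.yml on the source machine and conda env create -f environment.yml on the cloud VM reproduces the conda/pip packages, though packages that were built by hand from git sources will still need to be rebuilt on the new machine.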
45,865,164 | 2017-08-24T15:10:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook,jupyter | 45,865,241 | 2 | false | 0 | 0 | You can use markdowns. Or you can put in comment your code but it will be not in "jupyter way" | 1 | 3 | 0 | I am using jupyter notebooks to write some explanations on how to use certain functions. The code i am showing there is not complete, meaning it will give errors when executed. Is there a way to write display-only code in a jupyter notebook? | Jupyter notebook display code only | 0 | 0 | 0 | 5,049 |
45,866,292 | 2017-08-24T16:02:00.000 | 0 | 0 | 0 | 0 | python-3.x,machine-learning,data-analysis,catboost | 45,957,039 | 2 | true | 0 | 0 | You can run the algorithm for the maximum number of iterations and then use CatBoost.predict() with ntree_limit parameter or CatBoost.staged_predict() to try different number of iterations. | 1 | 3 | 1 | I want to find optimal parameters for doing classification using Catboost.
I have training data and test data. I want to run the algorithm for say 500 iterations and then make predictions on test data. Next, I want to repeat this for 600 iterations and then 700 iterations and so on. I don't want to start from iteration 0 again. So, is there any way I can do this in Catboost algorithm?
Any help is highly appreciated! | Use the previously trained model for further prediction in catboost | 1.2 | 0 | 0 | 970 |
45,873,077 | 2017-08-25T01:22:00.000 | 0 | 0 | 1 | 0 | python,ubuntu,tensorflow,ipython,anaconda | 45,873,121 | 1 | false | 0 | 0 | Did you notice your anaconda environment? Did you launch your ipython in the same environment as you installed tensorflow? | 1 | 0 | 1 | I just installed tensorflow for Ubuntu 16.04 through Anaconda3. I can run the test tensorflow script with python and have no issues. But when I run it with ipython, it cannot find the modules. How can I link ipython to read the same libraries as the python ones? | Tensorflow install--python can import but ipython cannot | 0 | 0 | 0 | 50 |
45,873,849 | 2017-08-25T03:09:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 45,873,935 | 1 | true | 0 | 0 | In general, x op = expression, is the syntax for shorthand operators. It will be evaluated like x = x op expression.
As said above in the comments, b += True will increment b by 1 and b += False will leave b unchanged. Here c > 0 evaluates to True or False first, so b += c > 0 increments b by 1 only when c is greater than 0. | 1 | 2 | 0 | I understand that an expression like i += 1 usually means i = i+1...
What would the operation below mean?
b += c > 0 | Expression with plus, equal, and greater than symbols in python | 1.2 | 0 | 0 | 594 |
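A tiny illustrative snippet of the boolean-arithmetic point made in the answer above (the variable names are just examples):
b = 0
c = 5
b += c > 0      # c > 0 is True, and True behaves like 1 in arithmetic, so b becomes 1
c = -3
b += c > 0      # c > 0 is False (i.e. 0), so b stays 1
print(b)        # prints 1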
45,878,039 | 2017-08-25T09:09:00.000 | 0 | 0 | 0 | 0 | python,django | 45,879,886 | 1 | false | 1 | 0 | Seems I needed to logout once and log in again in the app for it to work. Thanks. | 1 | 1 | 0 | I just switched from django 1.3.7 to 1.4.22 (on my way to updating to a higher version of django). I am using USE_TZ=True and TIME_ZONE = 'Europe/Bucharest'. The problem that I am encountering is a DateTimeField from DB (postgres) that holds the value 2015-01-08 10:02:03.076+02 (with timezone) is read by my django as 2015-01-08 10:02:03.076000 (without timezone) even thou USE_TZ is True.
Any idea why this might happen? I am using python 2.7.12 AMD64.
Thanks,
Virgil | Django offset-naive date from DB | 0 | 1 | 0 | 32 |
45,879,771 | 2017-08-25T10:52:00.000 | 3 | 0 | 1 | 1 | python,anaconda,conda | 46,000,463 | 1 | false | 0 | 0 | I posted this to the anaconda github page. Apparently this is an issue with the displayed output but isn't actually an error in the install. The virtual environment installations and updates do work, although they are slower than normal. | 1 | 4 | 0 | I'm installing anaconda python 3, version 4.4.0 on Windows machines. The installs finishes normally. But I'm getting errors when I try to use conda to update or to create virtual environments. Package resolution completes and downloads the packages but then hangs for a long time before throwing out a load of errors like so:
conda create -n py2 python=2.7 anaconda
INFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',
prefix: 'c:\anaconda\envs\py2', env_name: 'py2', mode: 'None', used_mode: 'system'
INFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',
prefix: 'c:\anaconda\envs\py2', env_name: 'py2', mode: 'None', used_mode: 'system'
INFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',
prefix: 'c:\anaconda\envs\py2', env_name: 'py2', mode: 'None', used_mode: 'system'
INFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',
prefix: 'c:\anaconda\envs\py2', env_name: 'py2', mode: 'None', used_mode: 'system'
INFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',
prefix: 'c:\anaconda\envs\py2', env_name: 'py2', mode: 'None', used_mode: 'system'
INFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',
prefix: 'c:\anaconda\envs\py2', env_name: 'py2', mode: 'None', used_mode: 'system'
INFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',
prefix: 'c:\anaconda\envs\py2', env_name: 'py2', mode: 'None', used_mode: 'system'
The environment will still have been created but this hanging and waiting is really becoming a problem. I assume this is a fairly new bug because I've been installing anaconda and using conda for quite a while and never seen this error before. | Anaconda3 conda command error menuinst_win32 | 0.53705 | 0 | 0 | 2,583 |
45,881,542 | 2017-08-25T12:33:00.000 | 1 | 0 | 1 | 1 | python,azure,azure-sql-database | 45,882,593 | 2 | false | 0 | 0 | Get the working PyODBC first. To use PyODBC you should compile it in its 32bits version. Or install Python 2.7 or 3.4 (32-Bit) and type the command "pip install pyodbc"
To use it in Azure WebJob, put the PyODBC.pyd file in the root directory of your job and it should work. | 1 | 1 | 0 | I need to create an Azure webjob that runs a python script which uses pyodbc.
The Azure compiler does not recognize pyodbc.
How do I install it or reference it in some way? | How to install a python library for an Azure webjob? | 0.099668 | 0 | 0 | 962 |
45,881,631 | 2017-08-25T12:38:00.000 | 0 | 0 | 1 | 0 | python,packages,atom-editor | 47,689,694 | 3 | false | 0 | 0 | search for script package and install it.
You can run the code by using ctrl+shift+b
The good thing about script package is you can run any code e.g. filename.c, filename.py etc. | 2 | 0 | 0 | I am new to python, and trying to set up my environment to start python builds. I am using Atom as an editor. What all should I do? Through some online tutorials, I got these recommendations, but I still get some errors when I open up a python project
Installed python
Installed pip
In Atom, installed the following packages:
linter
linter-flake8
linter-ui-default
busy-signal
intentions
I get this error:
Flake8 crashed!
linter-flake8:: Flake8 threw an error related to:
Failed to spawn command flake8. Make sure flake8 is installed and on your PATH
Please check Atom's Console for more details | What packages to install in Atom editor for python | 0 | 0 | 0 | 6,529 |
45,881,631 | 2017-08-25T12:38:00.000 | 0 | 0 | 1 | 0 | python,packages,atom-editor | 45,882,128 | 3 | true | 0 | 0 | Setting up Atom as an IDE might not be in your best interests to learn python.
Python is a scripting language. Pip is a package manager. Atom is a code editor. All you need now is the command line to bring all of them together.
Open your terminal and cd (change directory) to the location where your file is saved. Run it using the command "python file_name.py" on the terminal.
If you have imported a package in your script that isn't installed on your machine, simply execute "pip install package_name" on the terminal.
Happy programming!
PRO Tips :: If you learn to use vi or vim you can remove Atom from your list of tools. Also, might be a good idea to learn about virtualenv to keep your system sane. | 2 | 0 | 0 | I am new to python, and trying to set up my environment to start python builds. I am using Atom as an editor. What all should I do? Through some online tutorials, I got these recommendations, but I still get some errors when I open up a python project
Installed python
Installed pip
In Atom, installed the following packages:
linter
linter-flake8
linter-ui-default
busy-signal
intentions
I get this error:
Flake8 crashed!
linter-flake8:: Flake8 threw an error related to:
Failed to spawn command flake8. Make sure flake8 is installed and on your PATH
Please check Atom's Console for more details | What packages to install in Atom editor for python | 1.2 | 0 | 0 | 6,529 |
45,883,224 | 2017-08-25T14:06:00.000 | 1 | 1 | 0 | 1 | python,linux,bash,scripting,organization | 45,884,636 | 1 | false | 0 | 0 | sync.sh --> syncs the captured photo to some folders, where they are modified for 1. being shown on the second screen, 2. upload to
dropbox and 3. being printed. Also an ever-lasting-while-loop.
terminal-sync.sh --> copies the taken photos to the
second-screen-terminal, where they are shown in a gallery. It's also
an ever-lasting-while-loop.
For these, you can use inotifywait to wait for file availability before processing the file.
You should check, using top, which script is actually consuming CPU and why. Once you identify the script and understand why it consumes CPU, you can start looking for a more optimized way to do the same job. | 1 | 0 | 0 | I have built a photo booth on a raspberry pi. It works fantastic! But after some coding I now have a problem organizing my scripts. At the moment all scripts are launched via "lxterminal -e". So every script has its own terminal window and everything runs simultaneously. I ask myself if this can be done in a more efficient way.
The basic function of the photo booth: People press a remote button, take a picture, picture is being shown on the built-in tft.
start.sh --> is being executed automatically after booting. It prepares the system, , sets up the camera and brings it in tethered mode. After all that it launches the other, following scripts:
system-watchdog.sh --> checks continuously if one of the physical buttons on the photo booth is being pressed, to reboot or go into setup mode. It's an ever-lasting-while-loop.
sync.sh --> syncs the captured photo to some folders, where they are modified for being printed. Also an ever-lasting-while-loop.
backup.sh --> copies all taken pictures to a usb device as a backup. This is a cronjob, every 5 minutes.
temp-logger.sh --> Logs the temperature of the CPU continuously, because I had heat-problems.
The CPU is running constantly at about 20-40%. Maybe with some optimization I could run fewer scripts and use less CPU.
Any suggestions what I could use to organize the scripts in a better way?
Thanks for your suggestions! | Bash / Python: Organization of several scripts | 0.197375 | 0 | 0 | 48 |
45,891,271 | 2017-08-26T02:01:00.000 | 1 | 0 | 0 | 0 | python,neural-network | 45,891,349 | 2 | false | 0 | 0 | I'm not sure if I understand your question well.
Single-class training data does not really exist on its own. If you want to detect only the sea cucumber, it is effectively a two-class classification problem, right? Either it is a sea cucumber or it is not; yes and no are the two classes.
Yes, people do implement NNs on the Raspberry Pi, but to some extent it is merely possible rather than efficient. A good GPU will speed up training considerably.
A PC is able to train a small NN. | 1 | 1 | 1 | I'm totally new to neural networks (NN) in Python, and I do not know whether a NN can run on a Raspberry Pi 3, since I think the problem is that a NN requires good CPU/GPU performance for training, data transfer and calculation.
So is it possible to train a NN with single-class training data, in order to save CPU/GPU?
For example I want the system to detect only the sea cucumber in an image.
A good answer/explanation or link to any example will be very appreciated.
THANKYOU PO | Neural network to detect one class of object only | 0.099668 | 0 | 0 | 1,350 |
45,892,395 | 2017-08-26T06:00:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,python-requests | 45,893,315 | 1 | false | 1 | 0 | I'm having a hard time understanding your need for this type of implementation.
Are you trying to pass a JSON file between sessions? or just any type of file?
It would help to know your idea behind this implementation. | 1 | 0 | 0 | Would it be possible to upload a file and store it in a session and send that file using requests in python ? Can someone get back with a detailed answer | Handling file field values in session | 0 | 0 | 1 | 35 |
45,893,603 | 2017-08-26T08:41:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.x,django-models | 45,894,293 | 1 | false | 1 | 0 | When a user signs up in the app, give them three options: end user, service provider, administrator. Store these as BooleanFields. Then, when the user signs in, check which of these three flags is true and execute whichever branch matches. | 1 | 0 | 0 | I am developing a Django-based web service now and I am confused.
I have three projects as a django projects.
app1 - use from the end user
app2 - use from the service providor
app3 - use from the operator(administrator)
I made one database from app3(migration) and I created symbolic links (models.py, migrations dir) to app1 and app2.
Then, when I try to use Django's user authentication system from app1, I get the following error.
The above exception (relation "myapp1_user" does not exist LINE 1: ...myapp1_user"."modify_date" FROM ^ ) was the direct cause of the following exception:
I know what is wrong. It's because I don't have myapp1_user table on my database. I only have app3_user table.
But I have no idea how to configure this so it works properly.
Does somebody have any idea?
Please let me know. | Use django authentication system from multiple apps | 0 | 0 | 0 | 151 |
45,899,505 | 2017-08-26T20:41:00.000 | 0 | 0 | 1 | 1 | python | 45,899,652 | 1 | false | 0 | 0 | So the solution for this would be, setting the environment variable for python by following the steps below:
Click on:- My Computer > Properties > Advanced System Settings > Environment Variables
Then under system variables, create a new Variable called PyPath.
In this variable I have C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk;
This would work for windows | 1 | 0 | 0 | I have been stuck on this for a while. I've read many blogs but still struggling to understand how to set the path in my advanced settings in environment variables so that I can run my scripts in both cmd and python interpreter. | How do I configure the path of my python scripts so I can open it with cmd and python interpreter | 0 | 0 | 0 | 38 |
45,899,846 | 2017-08-26T21:33:00.000 | 0 | 0 | 0 | 0 | python,regression | 45,911,480 | 1 | false | 0 | 0 | I am also a rookie for data analysis and modeling.
If I faced this sort of problem, I might consider questions like:
Whether there is really a significant linear or generalized linear relationship between the independent and dependent variables? Should I pre-process or transform them before regression?
Whether it is necessary to involve interactions among predictor variables?
How good is the quality of the data set used to train the model? Is it good enough to capture the true underlying relationship between factors and responses?
Should I select a more suitable method to create the prediction model? For example, we usually choose Partial Least Squares Regression (PLS), other than Ordinary Least Squares Regression (OLS), to solve multicollinearity in my work area.
Hope these could be helpful for you. | 1 | 0 | 1 | I want to ask about a multi parameters linear regression model.
The question is as follows:
We now have data for 100 companies, and for each company I have the data for parameters A, B, C, D for 3 seasons (we can call them A1, A2, A3, B1, B2, B3, etc.).
We assume that there is some relationship (which we do not know yet, and need to find) between A and B, C, D, and now we need to predict A for season 4, which is A4...
My method is to calculate the relation using Ordinary Least Squares and obtain a final formula of the form A4 = x1*B4 + x2*C4 + x3*D4.
I get B4, C4, D4 by simply doing linear regression on B, C, D.
But the problem is that the A4 I get this way is worse than just doing linear regression on A directly...
Can someone tell me a better solution for the problem?
Thanks | How to build multi parameters linear regression in python | 0 | 0 | 0 | 85 |
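A minimal sketch of the OLS approach described in the question, using scikit-learn; it assumes the data has been arranged so that each row is one (company, season) observation with columns B, C, D and target A, and that B4, C4, D4 have already been forecast separately - all numbers here are made up:
import numpy as np
from sklearn.linear_model import LinearRegression
# X: one row per (company, season) with columns [B, C, D]; y: the matching A values
X = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0],
              [3.0, 2.0, 5.0]])
y = np.array([10.0, 12.0, 18.0])
model = LinearRegression()
model.fit(X, y)                       # learns A = x1*B + x2*C + x3*D plus an intercept
X4 = np.array([[3.5, 2.5, 4.0]])      # separately forecast B4, C4, D4 for one company
print(model.coef_, model.intercept_, model.predict(X4))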
45,902,751 | 2017-08-27T07:42:00.000 | 11 | 0 | 0 | 0 | python,amazon-web-services,amazon-ec2,amazon-elastic-beanstalk,amazon-linux | 46,018,192 | 2 | false | 1 | 0 | /opt/python – Root of where you application will end up.
/opt/python/current/app – The current application that is hosted in the environment.
/opt/python/on-deck/app – The app is initially put in on-deck and then, after all the deployment is complete, it will be moved to current. If you are getting failures in your container_commands, check out the on-deck folder and not the current folder.
/opt/python/current/env – All the env variables that eb will set up for you. If you are trying to reproduce an error, you may first need to source /opt/python/current/env to get things set up as they would be when eb deploy is running.
/opt/python/run/venv – The virtual env used by your application; you will also need to run source /opt/python/run/venv/bin/activate if you are trying to reproduce an error. | 1 | 9 | 0 | I just deployed a flask-python app with elastic beanstalk on AWS but cannot locate my app source files like application.py or templates/index.html etc
I've looked at looked at /var/../.. or /opt/../.. etc but nowhere to be found.
Is there an ebs command like $ eb find 'filename.py' etc? | Where is my python-flask app source stored on ec2 instance deployed with elastic beanstalk? | 1 | 0 | 0 | 2,438 |
45,902,890 | 2017-08-27T08:03:00.000 | -1 | 0 | 1 | 0 | python,debugging,pycharm,pydev,breakpoints | 45,904,346 | 2 | false | 0 | 0 | Take a look at Eric Python IDE and VSC(Visual Studio Code) | 1 | 3 | 0 | I am looking for the following (IMHO, very important) feature:
Suppose I have two functions fa() and fb(), both of them has a breakpoint.
I am now stopped in the breakpoint in fa function.
In the interactive debugger console I am calling fb().
I want to stop at fb's breakpoint, but unfortunately fb() runs and ignores the breakpoint.
someone in another SO thread called it "nested breakpoints".
I am a developer who comes from Matlab; in Matlab, no matter how a function is called (from the console or from the debugger), if it has a breakpoint it stops.
I read past threads about this subject and did not find any solution.
I also tried latest pycharm community and latest pydev and no luck.
I also read that visual studio can not make it.
Is this inherent in Python and technically can not be done?
Is there a technique / another IDE that supports it? | any Python IDE supports stopping in breakpoint from the debugger | -0.099668 | 0 | 0 | 846 |
45,905,665 | 2017-08-27T13:50:00.000 | 1 | 0 | 0 | 0 | python,tkinter | 52,132,169 | 3 | false | 0 | 1 | You can use destroy method for each widget for example if it's a button you write btn1.destroy() and do it for all widgets. The forget method isn't recommended for it only remove the widgets from appearance. | 1 | 3 | 0 | I'm trying to clear a tkinter window completely. However, I need a way to clear every widget on the window all at once without using pack.forget(). | Is there a way to clear all widgets from a tkinter window in one go without referencing them all directly? | 0.066568 | 0 | 0 | 28,747 |
45,907,492 | 2017-08-27T17:23:00.000 | 2 | 0 | 1 | 0 | python,function,call-graph | 45,907,510 | 2 | false | 0 | 0 | You should begin from the main function of the program and at the first layer link all functions that are called from the main this would provide a start point and then you can link all the functions below it. | 1 | 5 | 0 | I've been given a Python code, together with the modules it imports. I would like to build a tree indicating which function calls what other functions. How can I do that? | Building a call tree for a Python code | 0.197375 | 0 | 0 | 4,214 |
45,907,545 | 2017-08-27T17:30:00.000 | 0 | 0 | 0 | 0 | python,django,slug | 45,909,584 | 1 | false | 1 | 0 | Making slug unique ensures that it's unique in database so you can rely on it with no worries in your python/sql code. So it does make sense and that is best practice in general.
Concerning the IntegrityError, you need to provide more details. I'm using such fields with no problems. | 1 | 0 | 0 | I'm using a SlugField in my Django model and I've set it to be unique so that each post is unique; my slug is a combination of the title and the object's primary key.
My SlugField generates slugs like:
slug url : whats-your-favourite-character-from-the-defenders-1
The number at the end of the slug is the object's primary key. Since the primary key is already part of the slug, the URL is already unique, so does it make sense that I should also set the unique attribute on the SlugField?
My problem is that when I update an existing object it throws an error:
IntegrityError: UNIQUE constraint failed: polls_question.id | Does it required a slugfield's must be unique in django? | 0 | 0 | 0 | 168 |
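A sketch of one common way to build a title-plus-pk slug in Django; the model and field names here are hypothetical, and because the primary key is not known before the first save, the slug is filled in afterwards:
from django.db import models
from django.utils.text import slugify
class Question(models.Model):                                 # hypothetical model
    title = models.CharField(max_length=200)
    slug = models.SlugField(max_length=220, blank=True)       # uniqueness already comes from the pk suffix
    def save(self, *args, **kwargs):
        super(Question, self).save(*args, **kwargs)           # first save assigns the pk
        expected = "%s-%s" % (slugify(self.title), self.pk)
        if self.slug != expected:
            self.slug = expected
            super(Question, self).save(update_fields=["slug"])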
45,907,643 | 2017-08-27T17:39:00.000 | 0 | 0 | 0 | 0 | python,django | 49,237,591 | 1 | true | 1 | 0 | Placing additional python files alongside your settings.py should not create migration errors in the normal case. Are you sure these files are causing the errors?
To your question:
If you will use those functions inside a single app, you could create a subdirectory inside that app, for example utils; it should be on the same level as the migrations and templates directories of the app.
However, if you will be using those functions inside many apps you could create a new app without models and migrations, lets call it custom_utils, in which you can place all your functions. | 1 | 1 | 0 | I have rather vast implementation of functionality needed for processing specific type of data stored in my Django DB (7 interconnected files containing ~100 lines of code each). There's a facade method in one of those files that I call from related method in views.py so that there is no mess inside of views file.
For now I put everything inside a new folder in the same directory where settings.py urls.py and wsgi.py are, but now every time I'm calling makemigrations I receive a handful of unrelated information that has nothing to do with changes in data model.
What is the best place to store those routines? | Where to put routines in Django project? | 1.2 | 0 | 0 | 201 |
45,911,894 | 2017-08-28T04:43:00.000 | 0 | 0 | 1 | 1 | python,windows,background-process | 60,206,875 | 2 | false | 0 | 0 | You can run the file using pythonw instead of python means run the command pythonw myscript.py instead of python myscript.py | 1 | 7 | 0 | I'm fairly new to Python and I have a python script that I would like to ultimately convert to a Windows executable (which I already know how to do). Is there a way I can write something in the script that would make it run as a background process in Windows instead of being visible in the foreground? | How to put a Python script in the background without pythonw.exe? | 0 | 0 | 0 | 3,312 |
45,912,973 | 2017-08-28T06:35:00.000 | 0 | 0 | 0 | 0 | python,pyserial,data-acquisition | 45,913,039 | 2 | false | 0 | 0 | if this a "slow" process, that does not accurate time precision, use a while loop and time.sleep(2) to timeout the process for 2 seconds. | 1 | 0 | 0 | I am writing an application in python to acquire data using a serial communication. I use the pyserial library for establishing the communication. What is the best approach to request data in an interval (eg every 2 seconds). I always have to the send a request, wait for the answer and start the process again. | pyserial time based repeated data request | 0 | 0 | 1 | 241 |
45,913,233 | 2017-08-28T06:52:00.000 | 0 | 0 | 0 | 0 | python,lightgbm | 45,913,568 | 1 | false | 0 | 0 | This is a bug in LightGBM; 2.0.4 doesn't have this issue. It should be also fixed in LightGBM master. So either downgrade to 2.0.4, wait for a next release, or use LightGBM master.
The problem indeed depends on training data; feature_importances segfault only when there are "constant" trees in the trained ensemble, i.e. trees with a single leaf, without any splits. | 1 | 1 | 1 | I am using LightGBM 2.0.6 Python API. My training data has around 80K samples and 400 features, and I am training a model with ~2000 iterations, and the model is for multi-class classification (#classes = 10). When the model is trained, and when I called model.feature_importance(), I encountered segmentation fault.
I tried to generate artificial data to test (with the same number of samples, classes, iterations and hyperparameters), and I can successfully obtain the list of feature importance. Therefore I suspect whether the problem occurs depends on the training data.
I would like to see if someone else has encountered this problem and if so how was it overcome. Thank you. | Encountered segmentation fault when calling feature_importance in LightGBM python API | 0 | 0 | 0 | 639 |
45,913,845 | 2017-08-28T07:30:00.000 | 0 | 1 | 0 | 1 | python,raspberry-pi3 | 45,914,010 | 1 | false | 0 | 0 | Before installing any new update in your Raspberry, check first provided drivers for your devices. Otherwise keep a copy of your drivers and reinstall them after update and upgrade. | 1 | 0 | 0 | So I want to install opencv in my raspberry pi 3b. When I sudo update, upgrade and finally reboot my rasp pi, I noticed that my LCD touch is now disabled. Good thing I have a back-up of the OS to make the LCD touch enabled again. How will I avoid this? | LCD not working after sudo update and upgrade | 0 | 0 | 0 | 36 |
45,919,988 | 2017-08-28T13:15:00.000 | 0 | 0 | 1 | 0 | python,windows,compilation,executable,nuitka | 68,241,661 | 2 | false | 0 | 0 | Have you tried --onefile option with nuitka?? This worked for me on linux. | 2 | 6 | 0 | As title says, can I create single file executable with nuitka? I tried --portable and --standalone option but they does not seem to work. And can anyone please explain me what is the --recurse-all option? And if you have some other recommendations please tell me. I dont want to use pyinstaller because its too slow to start my app. Thanks for any response. | Can I create single file executable with nuitka? | 0 | 0 | 0 | 5,719 |
45,919,988 | 2017-08-28T13:15:00.000 | 3 | 0 | 1 | 0 | python,windows,compilation,executable,nuitka | 46,055,325 | 2 | false | 0 | 0 | This seems to work on my side with Qt bindings:
Nuitka-0.5.27/bin/nuitka --standalone --recurse-all --recurse-on --recurse-directory --show-progress --show-modules --plugin-enable=qt-plugins --python-version=2.7 --remove-output --output-dir=nuitka-build main.py
You will end up with "main.dist" directory with all the dependencies and the binary "main.exe". | 2 | 6 | 0 | As title says, can I create single file executable with nuitka? I tried --portable and --standalone option but they does not seem to work. And can anyone please explain me what is the --recurse-all option? And if you have some other recommendations please tell me. I dont want to use pyinstaller because its too slow to start my app. Thanks for any response. | Can I create single file executable with nuitka? | 0.291313 | 0 | 0 | 5,719 |
45,923,490 | 2017-08-28T16:26:00.000 | 0 | 1 | 0 | 0 | python,twitter,web-scraping,text-mining,scrape | 45,959,340 | 1 | false | 1 | 0 | Hardly so, and even if you manage to somehow do it, you'll most likely get blacklisted. Also, please read the community guidelines when it comes to posting questions. | 1 | 0 | 0 | i am student and i am totally new to scraping etc, today my supervisor gave me task to get the list of followers of a user or page(celebrity etc)
the list should contain information about every user (i.e user name, screen name etc)
After a long search i found that i can't get the age and gender of any user on twitter.
secondly i got help regarding getting list of my followers but i couldnt find help about "how i can get user list of public account"
kindly suggest me that its possible or not, and if it is possible, what are the ways to get to my goals
thank you in advance | is it possible to scrape list of followers of a public twitter acount (page) | 0 | 0 | 1 | 279 |
45,924,742 | 2017-08-28T17:50:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning,keras,pre-trained-model | 45,928,049 | 2 | false | 1 | 0 | The issue has been resolved.
I used the model.add() function and then added all the required layers of both Model 1 and Model 2.
The following code would add the first 10 layers of Model 2 just after the Model 1.
for i in model2.layers[:10]:
    model.add(i) | 1 | 1 | 1 | How can we join/combine two models in Transfer Learning in KERAS?
I have two models:
model 1 = My Model
model 2 = Trained Model
I can combine these models by putting model 2 as the input and then passing its output to model 1, which is the conventional way.
However, I am doing it the other way around. I want to put model 1 as the input and then pass its output to model 2 (i.e. the trained model). | Joining/Combining two models for Transfer Learning in KERAS | 0 | 0 | 0 | 2,275
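A hedged sketch of the layer-by-layer approach from the answer above, with small stand-in models so it runs on its own; for real networks the output shape of model 1 must match the input shape expected by the transferred layers of model 2:
from keras.models import Sequential
from keras.layers import Dense
model1 = Sequential([Dense(32, activation="relu", input_shape=(100,))])   # stand-in for "my model"
model2 = Sequential([Dense(16, activation="relu", input_shape=(32,)),     # stand-in for the trained model
                     Dense(10, activation="softmax")])
combined = Sequential()
for layer in model1.layers:
    combined.add(layer)
for layer in model2.layers:            # or model2.layers[:10] to take only the first 10 layers
    layer.trainable = False            # optionally freeze the transferred layers
    combined.add(layer)
combined.compile(optimizer="adam", loss="categorical_crossentropy")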
45,925,893 | 2017-08-28T19:13:00.000 | 0 | 0 | 1 | 0 | packages,jupyter-notebook,ipython-notebook,nameerror,canopy | 45,926,341 | 1 | true | 0 | 0 | What is your OS? Exactly what command are you entering, and from what kind of console? Please include an exact transcript (input and output) of your session. Also a link to the instructions that you are trying to follow.
For example, if you want to start a notebook from the command line, go to Canopy's tools menu, open a "Canopy Terminal / Command Prompt", and type jupyter notebook. Or, skip the command line and just use Canopy's File menu => New => Notebook. | 1 | 0 | 0 | I am running on Canopy version 2.1.3 and I keep getting a syntax error. I have tried using ipython-notebook as well as jupyter-notebook and both times I get back that ipython and jupyter are not defined. I've double checked and it looks like I have all the packages installed in order to use notebook and everything is up to date. | Error name not defined while trying to load Jupyter notebook from canopy version 2.1.3 | 1.2 | 0 | 0 | 75 |
45,927,259 | 2017-08-28T20:56:00.000 | 2 | 0 | 0 | 1 | google-cloud-platform,google-cloud-sdk,google-cloud-python | 56,367,158 | 6 | false | 0 | 0 | I just spent hours trying to make the installer run trying to edit ca cert files but the installer keeps wiping the directories as part of the installation process. In order to make the bundle gcloud sdk installer work, I ended up having to create an environment variable SSL_CERT_FILE and setting the path to a ca cert text file that contained the Google CAs + my company's proxy CA cert. Then the installer ran without issue. It seems that env variable is used by the python http client for CA validation.
Then you need to run gcloud config set custom_ca_certs_file before running gcloud init | 1 | 2 | 0 | Trying to install Google Cloud SDK(Python) on Windows 10 for All Users. Getting the following error.
This is a new machine and I am setting it up fresh. I installed Python 2.7 prior to this.
Please help me to resolve this.
Output folder: C:\Program Files (x86)\Google\Cloud SDK Downloading
Google Cloud SDK core. Extracting Google Cloud SDK core. Create Google
Cloud SDK bat file: C:\Program Files (x86)\Google\Cloud
SDK\cloud_env.bat Installing components. Welcome to the Google Cloud
SDK! This will install all the core command line tools necessary for
working with the Google Cloud Platform. Traceback (most recent call
last): File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 214, in
main() File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 192, in main
Install(pargs.override_components, pargs.additional_components) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 134, in
Install
InstallOrUpdateComponents(to_install, update=update) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 177, in
InstallOrUpdateComponents
['--quiet', 'components', verb, '--allow-no-backup'] + component_ids) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 813, in
Execute
self._HandleAllErrors(exc, command_path_string, specified_arg_names) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 787, in
Execute
resources = args.calliope_command.Run(cli=self, args=args) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\backend.py", line
754, in Run
resources = command_instance.Run(args) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\surface\components\update.py", line 99, in
Run
version=args.version) File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\update_manager.py",
line 850, in Update
command_path='components.update') File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\update_manager.py",
line 591, in _GetStateAndDiff
command_path=command_path) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\update_manager.py",
line 574, in _GetLatestSnapshot
*effective_url.split(','), command_path=command_path) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\snapshots.py",
line 165, in FromURLs
for url in urls] File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\snapshots.py",
line 186, in _DictFromURL
response = installers.ComponentInstaller.MakeRequest(url, command_path) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\installers.py",
line 285, in MakeRequest
return ComponentInstaller._RawRequest(req, timeout=timeout) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\installers.py",
line 329, in _RawRequest
should_retry_if=RetryIf, sleep_ms=500) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\util\retry.py", line 155,
in TryFunc
return func(*args, kwargs), None File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\url_opener.py", line 73,
in urlopen
return opener.open(req, data, timeout) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 429, in open
response = self._open(req, data) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 447, in _open
'_open', req) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 407, in _call_chain
result = func(*args) File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\url_opener.py", line 58,
in https_open
return self.do_open(build, req) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 1195, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers) File
"c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 1042, in request
self._send_request(method, url, body, headers) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 1082, in _send_request
self.endheaders(body) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 1038, in endheaders
self._send_output(message_body) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 882, in _send_output
self.send(msg) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 844, in send
self.connect() File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\third_party\httplib2__init__.py", line 1081,
in connect
raise SSLHandshakeError(e)
**httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661) Failed to install. | google cloud python sdk installation error - SSL Certification Error | 0.066568 | 0 | 1 | 13,684 |
45,928,055 | 2017-08-28T22:14:00.000 | 2 | 0 | 1 | 0 | python | 45,928,083 | 1 | true | 0 | 0 | This is pretty broad, but your guiding consideration should be that you need to provide hashable keys. Given your constraints, I think a tuple (or namedtuple) would fit well. Namedtuple operates like a record, or lightweight class, so you can get the benefits of calling with dot notation while having an immutable data structure. | 1 | 1 | 0 | I'm relatively new to coding in python. I know most of the syntax, but actually applying it within real programs is new to me. I wanted to know, if I have some item with multiple properties, how would I store that?
I am creating a Japanese learning tool, following a textbook, and I need a way to store, and later access the vocabulary. For example...
If I have the word おはよう, this in romanized type is "ohayou", and its definition is "Good Morning", also this vocab is located in "Lesson 1" of the book.
I was thinking of creating a dictionary, with maybe a tuple/array/list for the value, or key to store more properties per vocab word. Then I thought maybe I could use a class as well, but thought I would need a class for each vocab word as objects? I just want to know what would be the most efficient, and easy storage method for these vocab words and all their different English, and Japanese properties. | Python: What storage method to use for multi-property dictionary? | 1.2 | 0 | 0 | 66 |
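A small sketch of the namedtuple-based layout suggested in the answer above; the field names are just an example:
from collections import namedtuple
VocabEntry = namedtuple("VocabEntry", ["kana", "romaji", "definition", "lesson"])
vocab = {
    u"おはよう": VocabEntry(kana=u"おはよう", romaji="ohayou",
                            definition="Good Morning", lesson="Lesson 1"),
}
entry = vocab[u"おはよう"]
print(entry.romaji, entry.definition, entry.lesson)   # dot-notation access on an immutable record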
45,928,987 | 2017-08-29T00:20:00.000 | 4 | 0 | 1 | 0 | python,ms-access,odbc,32bit-64bit,pyodbc | 45,929,130 | 3 | false | 0 | 0 | Unfortunately, you need 32-bit Python to talk to 32-bit MS Access. However, you should be able to install a 32-bit version of Python alongside 64-bit Python. Assuming you are using Windows, during a custom install you can pick the destination path. Then use a virtualenv. For example, if you install to C:\Python36-32:
virtualenv --python=C:\Python36-32\bin\python.exe
Good luck! | 1 | 10 | 0 | I am using 64-bit python anaconda v4.4 which runs python v3. I have MS Access 2016 32-bit version. I would like to use pyodbc to get python to talk to Access. Is it possible to use 64-bit pyodbc to talk to a MS Access 2016 32-bit database?
I already have a number of python applications running with the 64-bit python anaconda. It will be a chore to downgrade to 32-bit python. | Is it possible for 64-bit pyodbc to talk to 32-bit MS access database? | 0.26052 | 1 | 0 | 15,479 |
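A hedged sketch of what the connection looks like once the bitness of Python, pyodbc and the Access ODBC driver all match; the database path and table name are placeholders:
import pyodbc
conn_str = (
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\path\to\your.accdb;"                       # placeholder path
)
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM SomeTable")        # SomeTable is hypothetical
print(cursor.fetchone()[0])
conn.close()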
45,929,449 | 2017-08-29T01:34:00.000 | 0 | 0 | 1 | 0 | python | 45,929,506 | 4 | false | 0 | 0 | I think python code can be compiled to some extent but we are unable to compile everything in python before hand. This is due to the loosely typed style of python where you can change variable type anywhere in the program. A modification of Python namely Rpython has much strict style and hence can be compiled completely. | 3 | 9 | 0 | I understand that Python is an interpreted language, but the performance would be much higher if it was compiled.
What exactly is preventing python from being compiled?
Why was python designed as an interpreted language and not a compiled one in the first place?
Note: I know about .pyc files, but those are bytecode, not compiled files. | What's preventing python from being compiled? | 0 | 0 | 0 | 6,609 |
45,929,449 | 2017-08-29T01:34:00.000 | 0 | 0 | 1 | 0 | python | 45,929,716 | 4 | false | 0 | 0 | Python is language which primarily built for writing readable and expressive code.
Python borrows many features from its neighbours.
Let's see why Python code does not need to be compiled to assembly or machine code ahead of time.
Let's compare a native language with Python; take C++, for example.
Python has some language-specific features, such as not requiring any type declarations; this is managed by the Python interpreter. If you tried to implement the same feature in C++, it would be a burden on the compiler: it would have to add code to check a variable's type every time it is accessed. A hypothetical Python compiler would have to do the same runtime checks, so you would not be improving runtime performance much at all.
Also, most of the Python built-in functions are C functions which the interpreter calls internally when you call them from a Python script.
The primary reason we don't need a Python compiler is that it would not improve performance on a large scale, and it is wasteful to write software that increases risk rather than reducing it. Python is quite fast once all of its code is in main memory. | 3 | 9 | 0 | I understand that Python is an interpreted language, but the performance would be much higher if it was compiled.
What exactly is preventing python from being compiled?
Why was python designed as an interpreted language and not a compiled one in the first place?
Note: I know about .pyc files, but those are bytecode, not compiled files. | What's preventing python from being compiled? | 0 | 0 | 0 | 6,609 |
45,929,449 | 2017-08-29T01:34:00.000 | 0 | 0 | 1 | 0 | python | 45,929,500 | 4 | false | 0 | 0 | Python is a scripting language, often used for things like rapid prototyping or fast development, so I guess the thought process behind interpretor over compiler is that it simplifies things for the programmer in those domains (at the cost of performance). Nothing is stopping you or others from writing compilers for Python however; Facebook did something like this for PHP when they wrote HHVM to execute the bytecode of compiled Hack (their typed variant of PHP).
In fact, there are project(s) out there that do just that with python. Cython is one example I can think of off the top of my head (cython.org). | 3 | 9 | 0 | I understand that Python is an interpreted language, but the performance would be much higher if it was compiled.
What exactly is preventing python from being compiled?
Why was python designed as an interpreted language and not a compiled one in the first place?
Note: I know about .pyc files, but those are bytecode, not compiled files. | What's preventing python from being compiled? | 0 | 0 | 0 | 6,609 |
45,930,229 | 2017-08-29T03:26:00.000 | 0 | 0 | 0 | 0 | python,html,css,line,newline | 56,186,862 | 4 | false | 1 | 0 | Try using these options.
If your content-type is html, then use
"String to be displayed"+"<br \>"+"The string to be displayed in new line"
Else, If your content-type is plain text then use
"String to be displayed"+"\n"+"The string to be displayed in new line" | 2 | 1 | 0 | Basically I'm building a chatbot using python. When running on python, I can display the answer with multiple lines by using \n tag. However , when I bring it to HTML to display it on website using Flask, it cannot render \n tag so there is no line break.
I have also tried to replace \n to <br/> but no help. It prints out the br tag instead of converting it to a line break.
Please guide. | How to break line on html when use string from python | 0 | 0 | 0 | 2,578 |
45,930,229 | 2017-08-29T03:26:00.000 | 0 | 0 | 0 | 0 | python,html,css,line,newline | 45,930,326 | 4 | false | 1 | 0 | in some textArea <br/> will not word
you can use the HTML entity &#10; to break a line,
just like &nbsp; is used to stand for a space in html | 2 | 1 | 0 | Basically I'm building a chatbot using python. When running on python, I can display the answer with multiple lines by using \n tag. However, when I bring it to HTML to display it on website using Flask, it cannot render \n tag so there is no line break.
I have also tried to replace \n to <br/> but no help. It prints out the br tag instead of converting it to a line break.
Please guide. | How to break line on html when use string from python | 0 | 0 | 0 | 2,578 |
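One hedged sketch of how this is commonly handled in Flask/Jinja2: replace the newlines with <br/> and mark the string as safe HTML (or apply the safe filter in the template); only do this for text you trust, because it disables escaping:
from flask import Flask, Markup, render_template_string   # Markup also lives in markupsafe
app = Flask(__name__)
@app.route("/")
def index():
    answer = "first line\nsecond line"
    html_answer = Markup(answer.replace("\n", "<br/>"))    # Markup tells Jinja2 not to escape it
    # Equivalent alternative in the template itself: {{ answer|safe }}
    return render_template_string("<p>{{ answer }}</p>", answer=html_answer)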
45,931,604 | 2017-08-29T05:55:00.000 | -1 | 1 | 1 | 1 | python,windows,security,server | 45,931,678 | 3 | false | 0 | 0 | There is no way to make them not readable but executable at the same time. | 2 | 0 | 0 | I have a bunch of Python scripts which are to be run by several different users. The scripts are placed in a Windows server environment.
What I wish to achieve is to protect these scripts in such a way that standard users are allowed to run them but do not have the rights to read/modify/move them.
Is this even possible and if so what is the optimal strategy?
Thanks in advance. | Protect python script on Windows server | -0.066568 | 0 | 0 | 381 |
45,931,604 | 2017-08-29T05:55:00.000 | 1 | 1 | 1 | 1 | python,windows,security,server | 45,932,177 | 3 | false | 0 | 0 | You can compile Python modules into native libraries with Cython and provide compiled files only; though it involves a lot of hassle and doesn't work in some cases. They still can be decompiled to C code but it will be mostly unreadable.
Pros: 1. compiled libraries can be imported as normal Python modules.
Cons: 1. requires additional setup; 2. doesn't work in some cases, e.g. celery tasks cannot reside in compiled modules because: 3. you lose introspection abilities; 4. tracebacks are basically unreadable. | 2 | 0 | 0 | I have a bunch of Python scripts which are to be run by several different users. The scripts are placed in a Windows server environment.
What I wish to achieve is to protect these scripts in such a way that standard users are allowed to run them but do not have the rights to read/modify/move them.
Is this even possible and if so what is the optimal strategy?
Thanks in advance. | Protect python script on Windows server | 0.066568 | 0 | 0 | 381 |
45,931,835 | 2017-08-29T06:11:00.000 | 1 | 0 | 0 | 0 | python,vector,machine-learning,k-means,dbscan | 46,190,984 | 1 | false | 0 | 0 | Your main problem is how to measure similarity.
I'm surprised you got the algorithms to run at all, because usually they would expect all vectors to have exactly the same length for computing distances. Maybe you had them automatically filled up with 0 values - and that is likely why the long vectors end up being very far away from all others.
Don't use the algorithms as black boxes
You need to understand what they are doing or the result will likely be useless. In your case, they are using a bad distance, so of course the result can't be very good.
So first, you'll need to find a better way of computing the distance of two points with different length. How similar should [0,1,2,1,0] and [30,40,50,60,50,40,30] be. To me, this is a highly similar pattern (ramp up, ramp down). | 1 | 0 | 1 | I'm pretty new to ML and Datascience, so my question may be a little silly.
I have a dataset, each row is a vector [a1,a2,a3,a3,...,an]. Those vectors are different not only in their measurements but also in number of n and the sum A = a1 + a2 + a3 +...+ an.
Most of the vectors have 5-6 dimensions, with some exception at 15-20 dimensions. On average, their components often have value of 40-50.
I have tried Kmeans, DBSCAN and GMM to cluster them:
Kmeans overall gives the best result, however, for vectors with 2-3 dimensions and vectors with low A, it often misclassifies.
DBSCAN can only separate vector with low dimension and low A from the dataset, the rest it treats as noise.
GMM separates the vectors with 5-10 dimension, low A, very good, but performs poorly on the rest.
Now I want to include the information of n and A into the process. Example:
-Vector 1 [0,1,2,1,0] and Vector 2 [0,2,4,5,3,2,1,0], they are differents in both n and A, they can't be in the same cluster. Each cluster only contains vectors with similar(close value) A and n, before taking their components into account.
I'm using sklearn on Python, I'm glad to hear suggestion and advice on this problem. | Add a criteria to the dataset in clustering | 0.197375 | 0 | 0 | 91 |
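A hedged sketch of one possible preprocessing step along the lines of the answer above: resample every vector to a common length (instead of zero-padding) and append n and A as extra features before clustering; this is just one choice of similarity, not the only one, and the features may still need scaling:
import numpy as np
from sklearn.cluster import KMeans
def to_features(vec, target_len=8):
    vec = np.asarray(vec, dtype=float)
    # resample the "shape" of the vector onto a fixed grid instead of padding with zeros
    resampled = np.interp(np.linspace(0, 1, target_len),
                          np.linspace(0, 1, len(vec)), vec)
    return np.concatenate([resampled, [len(vec), vec.sum()]])   # append n and A
data = [[0, 1, 2, 1, 0], [0, 2, 4, 5, 3, 2, 1, 0], [30, 40, 50, 60, 50, 40, 30]]
X = np.array([to_features(v) for v in data])
labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)
print(labels)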
45,932,262 | 2017-08-29T06:35:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-rest-framework | 45,932,377 | 2 | false | 1 | 0 | change your TIME_ZONE in settings.py
TIME_ZONE = 'Asia/Kolkata'
and then try datetime.now() it will give time in IST | 1 | 1 | 0 | I am running python script using supervisor. Script has function timezone.now(). This always returns date when I started supervisor.
I am not able to find out why timezone.now() is giving the wrong date.
Can anyone help? | Django timezone.now() not returning current datetime when script running using supervisor | 0 | 0 | 0 | 485 |
45,933,457 | 2017-08-29T07:46:00.000 | 1 | 0 | 0 | 0 | python,ios,swift,django,uitableview | 45,933,635 | 3 | false | 1 | 0 | I think best approach will be pagination. Fetch like 10 or 20 data in the beginning and save it in an array and load tableview. When user starts scrolling the tableview in method willDisplayCell increase page count and fetch next 10 or 20 data from server and append it in the array. | 1 | 0 | 0 | I am using Django as my backend and inside my iOS application , I am fetching data through API's. I have a UITableView where I am fetching a list of places which also have a rating value with them. After implementing the refresh control I noticed that whenever I used to refresh, it used to append the new data in the array and created duplicity. I thought of a very simple solution which is that we can actually empty our array when refreshing so that we can avoid duplicity. Is it the most efficient way to do this? I am concerned because right now I have 3 places in my backend which I am trying to fetch and what about when there will be hundreds , will we have any performance impact ? | Refreshing Table View Efficiently | 0.066568 | 0 | 0 | 104 |
45,933,867 | 2017-08-29T08:08:00.000 | 0 | 1 | 1 | 0 | python,anaconda,libstdc++ | 45,933,868 | 1 | false | 0 | 0 | The problem comes from anaconda 4.2.0 environment libstdc++, run
strings ANACONDA_HOME/bin/../lib/libstdc++.so.6 | grep GLIBCXX
You might see following output
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_FORCE_NEW
GLIBCXX_DEBUG_MESSAGE_LENGTH
As you see there is no GLIBCXX_3.4.20 in anaconda 4.2.0 libstdc++
Now run the following commands to solve the problem:
cd ANCONDA_HOME/lib
rm libstdc++.so.6.0.19
ln -s /usr/lib/x86_64-linux-gnu/libstdc++.so.6 libstdc++.so.6.0.19
Hopefully this should solve the problem. | 1 | 0 | 0 | pyndri installed successfully, but when I am importing it, I get the following error:
undefined symbol:_ZTVNSt7__cxx1119basic_istringstreamIcSt11char_traits.... | undefined symbol: _ZTVNSt7__cxx1119basic_istringstreamIcSt11char_traitsIc.. while importing pyndri | 0 | 0 | 0 | 1,106 |
45,934,277 | 2017-08-29T08:31:00.000 | 0 | 1 | 0 | 1 | python,web-scraping,debian,virtualbox | 45,934,419 | 1 | false | 0 | 0 | First check the parameters of your virtual machine you might have given it much more RAM or processors than you have (or not enough).
If this is not the case close everything in the VM and only start the script.
These errors generally say that you don't have resources to perform the operation.
Check if your syntax is ok and if you are using the same version of python on both systems.
Note that the VM is a guest system and can't have as much resources as your main OS because the main Os will die in some circumstances. | 1 | 1 | 0 | I have got a Windows7 system, and I installed on it a Virtual Box 5.1.26.
On this virtual box, I installed a Debian64 - Linux server. (I think I configured it correctly, it is getting enough memory).
When I want run a Python script on it (which is a web-scraping script, it process around 1000 pages and take it into database), i get always the same error message after a few minutes :
Unable to allocate and lock memory. The virtual machine will be paused. Please close applications to free up memory or close the VM.
Or something error message with : run out of time (when it want to load a website)
In the windows7 system my script is working without any problem, so I am a little bit confused now, what is the problem here? | How to run a python script successfully with a debian system on the VirtualBox? | 0 | 0 | 0 | 192 |
45,934,315 | 2017-08-29T08:34:00.000 | 6 | 0 | 1 | 0 | python,garbage-collection,dispose,idisposable | 51,951,188 | 1 | false | 0 | 0 | Python has:
try ... finally, which is equivalent to the similar construction in C#;
the with statement, which is an analogue of using(... = new ...()) in C#.
Note that, unlike C#, with statement accepts already constructed object, calls __enter__ when entering and __exit__ when exiting. I.e., the object is initialized in __init__, the resource is acquired in __enter__ and disposed of in __exit__. Therefore, such an object can be used multiple times.
With contextlib.closing, it's possible to get closer to C#. Acquire the resource in __init__ and dispose of it in the close method. contextlib.closing makes a wrapper which calls close in its __exit__.
In your case, you should make all preparations in __init__, acquire the actual handle in __enter__, and dispose of it in __exit__. | 1 | 3 | 0 | I have to create a python-wrapper for a C API. I have used ctypes to call into C dlls from python. I am able to create Handle and use it from python. I am looking for Dispose pattern in Python similar to that of C#. Does there exist a Dispose pattern in python? | Is there a Dispose Pattern in Python? | 1 | 0 | 0 | 1,541 |
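A minimal sketch of the context-manager pattern described in the answer above; the acquire/release lines stand in for whatever your C API exposes through ctypes:
import contextlib
class Handle(object):
    def __init__(self, name):
        self.name = name           # only cheap preparation here
        self._handle = None
    def __enter__(self):
        self._handle = object()    # stand-in for e.g. mydll.open_handle(self.name)
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self._handle = None        # stand-in for e.g. mydll.close_handle(self._handle)
        return False               # do not swallow exceptions
with Handle("resource") as h:
    pass                           # use h._handle here; it is released on exit, like C#'s using
# contextlib.closing is the alternative when the object exposes close() instead of __exit__:
# with contextlib.closing(something_with_close()) as obj: ...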
45,935,059 | 2017-08-29T09:12:00.000 | 0 | 0 | 0 | 0 | python,selenium,selenium-webdriver,web-scraping,selenium-chromedriver | 45,935,491 | 1 | false | 0 | 0 | if you build a robust scrapy system,you must think about exception handling . i have done some large scrape project .in these projects we will use a log system to record the exception ,then retry when don't success scraping the message. | 1 | 0 | 0 | I have a Python + Selenium script that helps me scrape information. However, the webpage encounters an error from time to time and then I need to refresh the page and scrape again. The problem is that the error is erratic and it might crash my scraper when I already clicked some buttons or filled some forms.
I need to find an elegant method to refresh the page exactly with the same buttons clicked (I mean, exactly to the same state). Any help? | How to make Selenium refresh page to last state of its elements? | 0 | 0 | 1 | 289 |
45,935,428 | 2017-08-29T09:28:00.000 | 1 | 0 | 0 | 1 | python,airflow,flask-login,apache-airflow | 46,274,739 | 1 | true | 0 | 0 | You can get it by calling {{ current_user.user.username }} or {{ current_user.user }} in your html jenja template. | 1 | 0 | 0 | Does anyone know how I'll be able to get the current user from airflow? We have our backend enabled to airflow/contrib/auth/backends/ldap_auth.pyand so users log in via that authentication and I want to know how to get the current user that clicks on something (a custom view we have as a plugin). | Airflow: Get user logged in with ldap | 1.2 | 0 | 0 | 1,335 |
45,935,606 | 2017-08-29T09:36:00.000 | 0 | 0 | 1 | 0 | python,matplotlib | 45,935,693 | 1 | false | 0 | 0 | If you change the command window to white you should be able to see the text. If you are using the command %matplotlib inline, together with a black command window, the text will not always be visible. | 1 | 0 | 1 | I have created some subplots using matplotlib librairies (pyplot and gridspec). I am trying to put some text in front of the graphs, but sometimes they are located below, in the background so I can't see them.
I don't know if I should use plt.text or annotate, or rather use methods of the subplots? | Matplotlib, put text in front of graphs | 0 | 0 | 0 | 207
45,939,572 | 2017-08-29T12:46:00.000 | 0 | 0 | 1 | 0 | python,variables,integer | 45,939,739 | 3 | false | 0 | 0 | You cannot specify a precision directly. When a python int grows too large they will automatically be converted to a python long. You can initialize a python long by appending an 'l' or 'L'. Note however that this is only possible in python 2.
For example:
long_int = 398593849843l
another = 13L | 1 | 2 | 0 | I would like to know whether it be possible to set a length for a variable in python like in C: short, long, etc. | How can I set a length (for example "short" or "long") to a int variable? | 0 | 0 | 0 | 327 |
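If C-like fixed-width behaviour is really needed, one hedged option is ctypes, which provides fixed-size integer types; this is a sketch, not something plain Python ints do by themselves:
import ctypes
x = ctypes.c_short(70000)     # 16-bit signed, so the value wraps around
y = ctypes.c_longlong(70000)  # 64-bit signed, the value fits
print(x.value, y.value)       # 4464 70000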
45,942,883 | 2017-08-29T15:22:00.000 | 2 | 1 | 0 | 1 | python,caffe,layer | 45,944,107 | 1 | true | 0 | 0 | Your python layer has two parameters in the prototxt: layer: where you define the python class name implementing your layer, and moduule: where you define the .py file name where the layer class is implemented.
When you run caffe (either from the command line or via the Python interface), you need to make sure your module is on the PYTHONPATH. | 1 | 1 | 0 | if you are using a custom python layer - and assuming you wrote the class correctly in python - let's say the name of the class is "my_ugly_custom_layer"; and you execute caffe in the linux command line interface,
how do you make sure that caffe knows how to find the file where you wrote the class for your layer? do you just place the .py file in the same directory as the train.prototxt?
or
if you wrote a custom class in python, do you need to use the python wrapper interface? | Berkeley caffe command line interface | 1.2 | 0 | 0 | 263 |
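For concreteness, a bare-bones sketch of what such a module might contain, assuming a file my_ugly_custom_layer.py placed on the PYTHONPATH (e.g. export PYTHONPATH=$PYTHONPATH:/path/to/layers before running caffe) and referenced from the prototxt's python_param with module: 'my_ugly_custom_layer' and layer: 'MyUglyCustomLayer':

```python
# my_ugly_custom_layer.py -- must be importable, i.e. on the PYTHONPATH.
import caffe


class MyUglyCustomLayer(caffe.Layer):
    """Identity pass-through layer, just to show the required interface."""

    def setup(self, bottom, top):
        # Parse self.param_str and validate bottom/top counts here.
        pass

    def reshape(self, bottom, top):
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        top[0].data[...] = bottom[0].data

    def backward(self, top, propagate_down, bottom):
        bottom[0].diff[...] = top[0].diff
```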
45,944,697 | 2017-08-29T17:07:00.000 | 0 | 0 | 0 | 0 | python,google-cloud-datalab | 46,005,947 | 1 | false | 0 | 0 | I was loading the data incorrectly. I was using pandas read_csv on my local machine, and BytesIO in Datalab. The comma in the numerical values was throwing off the import of the data. I had to specify that the delimiter is "," and that the thousands separator is also ",". | 1 | 0 | 1 | I have a Python script that runs perfectly in my IDE on my local machine, but when I run it on Google Datalab, it throws this error:
ValueError: could not convert string to float: '80,354'
The code is simple, and the graph prints in my PyCharm IDE, but not on Google Datalab.
plt.plot(new_df['Volume'])
plt.show()
The error is related to the last line in the data. I'm using the date as the index. Here's what the data looks like below. Is there a slash missing somewhere? What am I doing wrong or missing?
' Micro Market Volume\nMonth/Year \n2014-01-01 DALLAS-FT WORTH 63,974\n2014-02-01 DALLAS-FT WORTH 68,482\n2014-03-01 DALLAS-FT WORTH 85,866\n2014-04-01 DALLAS-FT WORTH 79,735\n2014-05-01 DALLAS-FT WORTH 75,339\n2014-06-01 DALLAS-FT WORTH 71,739\n2014-07-01 DALLAS-FT WORTH 85,893\n2014-08-01 DALLAS-FT WORTH 83,694\n2014-09-01 DALLAS-FT WORTH 87,567\n2014-10-01 DALLAS-FT WORTH 87,389\n2014-11-01 DALLAS-FT WORTH 68,340\n2014-12-01 DALLAS-FT WORTH 74,805\n2015-01-01 DALLAS-FT WORTH 68,568\n2015-02-01 DALLAS-FT WORTH 61,924\n2015-03-01 DALLAS-FT WORTH 56,885\n2015-04-01 DALLAS-FT WORTH 68,101\n2015-05-01 DALLAS-FT WORTH 52,806\n2015-06-01 DALLAS-FT WORTH 79,918\n2015-07-01 DALLAS-FT WORTH 92,134\n2015-08-01 DALLAS-FT WORTH 88,047\n2015-09-01 DALLAS-FT WORTH 91,377\n2015-10-01 DALLAS-FT WORTH 91,307\n2015-11-01 DALLAS-FT WORTH 65,415\n2015-12-01 DALLAS-FT WORTH 81,456\n2016-01-01 DALLAS-FT WORTH 82,820\n2016-02-01 DALLAS-FT WORTH 91,688\n2016-03-01 DALLAS-FT WORTH 81,495\n2016-04-01 DALLAS-FT WORTH 87,872\n2016-05-01 DALLAS-FT WORTH 82,031\n2016-06-01 DALLAS-FT WORTH 100,783\n2016-07-01 DALLAS-FT WORTH 99,285\n2016-08-01 DALLAS-FT WORTH 99,179\n2016-09-01 DALLAS-FT WORTH 93,939\n2016-10-01 DALLAS-FT WORTH 99,663\n2016-11-01 DALLAS-FT WORTH 86,751\n2016-12-01 DALLAS-FT WORTH 84,551\n2017-01-01 DALLAS-FT WORTH 81,890\n2017-02-01 DALLAS-FT WORTH 90,212\n2017-03-01 DALLAS-FT WORTH 97,798\n2017-04-01 DALLAS-FT WORTH 89,338\n2017-05-01 DALLAS-FT WORTH 96,891\n2017-06-01 DALLAS-FT WORTH 86,613\n2017-07-01 DALLAS-FT WORTH 80,354' | Google Datalab and Python Issue | 0 | 0 | 0 | 68 |
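A sketch of the fix described in the accepted answer, using an inline byte string as a stand-in for whatever BytesIO content Datalab hands back; the essential arguments are sep=',' and thousands=',', which let '63,974' parse as a number instead of raising the ValueError:

```python
import pandas as pd
from io import BytesIO

# Stand-in for the bytes read from storage inside Datalab.
raw = (b"Month/Year,Micro Market,Volume\n"
       b'2014-01-01,DALLAS-FT WORTH,"63,974"\n'
       b'2014-02-01,DALLAS-FT WORTH,"68,482"\n')

new_df = pd.read_csv(BytesIO(raw), sep=",", thousands=",",
                     index_col="Month/Year", parse_dates=True)

print(new_df["Volume"])  # numeric dtype, not object/strings
```

With the Volume column numeric, plt.plot(new_df['Volume']) then behaves in Datalab as it did locally.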
45,945,387 | 2017-08-29T17:48:00.000 | 12 | 0 | 1 | 0 | python,kivy,beeware | 45,948,867 | 1 | true | 0 | 1 | Toga achieves its gui by mapping the Toga api to native platform widgets on different systems. This means that the apps will automatically look and behave like other 'native' apps from that system. In contrast, Kivy uses opengl for drawing, using its own widget toolkit. This means that by default it looks and behaves exactly the same on all different platforms. You can customise it, but in practice it's very hard to get something that really acts just like another framework.
Both methods have advantages and disadvantages. Kivy is quite flexible and portable, since you can use OpenGL just about anywhere, and the harder part is probably compiling Kivy and Python itself. On the other hand, Toga's method is the only way to get something that really acts like a native app, and it also possibly sidesteps some Kivy problems like relatively slow startup on Android. That said, I'm not sure if the need to wrap different widgets explicitly means it may be less flexible, compared to Kivy's drawing API that can achieve basically anything without special platform support. | 1 | 8 | 0 | I know that the only way to build cross-platform in Python is Kivy, but I recently heard of the Beeware project and this tool called Toga. As far as I know it's still in its early stages and a lot of people aren't familiar with it either, but there are a couple of basic tutorials on the website. It looks very promising, but I don't know about its future and the issues I might face if I start working on it, as it might have a lot of bugs as of now. I read in the docs that Toga lets you build native cross-platform apps; are Kivy apps not native? Are they like hybrid apps, like the ones you build on PhoneGap? Thanks | Difference between Kivy and Toga (Beeware project) for Cross platform in Python | 1.2 | 0 | 0 | 3,705 |