Dataset column summary (for string columns, Min/Max are string lengths):

Column                              Dtype          Min    Max
Q_Id                                int64          337    49.3M
CreationDate                        stringlengths  23     23
Users Score                         int64          -42    1.15k
Other                               int64          0      1
Python Basics and Environment       int64          0      1
System Administration and DevOps    int64          0      1
Tags                                stringlengths  6      105
A_Id                                int64          518    72.5M
AnswerCount                         int64          1      64
is_accepted                         bool           2 classes
Web Development                     int64          0      1
GUI and Desktop Applications        int64          0      1
Answer                              stringlengths  6      11.6k
Available Count                     int64          1      31
Q_Score                             int64          0      6.79k
Data Science and Machine Learning   int64          0      1
Question                            stringlengths  15     29k
Title                               stringlengths  11     150
Score                               float64        -1     1.2
Database and SQL                    int64          0      1
Networking and APIs                 int64          0      1
ViewCount                           int64          8      6.81M
41,989,065
2017-02-01T20:24:00.000
1
0
1
0
python,regex
41,989,121
2
false
0
0
Wouldn't this get all groups of adjacent \r and \n characters regardless of order or amount? Edited per comments: [\r\n]+
1
1
0
Is there a way to match a line break independently of the system? i.e. match both \n and \r\n. The only thing I can think of is \r?\n, which just feels clunky. The reason I want to do this is that if I need to match 2 in a row, \n\n no longer works, and if I match \n, the preceding \r will still exist and I would have to strip it off, or it could lead to problems later.
python regex: match any style line break
0.099668
0
0
6,833
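As a concrete illustration of the patterns discussed in the answer above, here is a minimal sketch; the sample string is made up.

```python
import re

# Made-up sample mixing Unix (\n) and Windows (\r\n) endings, with one blank line.
text = "first\r\nsecond\r\n\r\nthird\nfourth"

# \r?\n matches a single line break on either platform; repeating it twice
# matches a blank line (what the question calls "2 in a row").
print(re.split(r"(?:\r?\n){2}", text))   # ['first\r\nsecond', 'third\nfourth']

# [\r\n]+ from the answer collapses ANY run of \r/\n, so it also splits on
# single line breaks and cannot tell a blank line from a normal one.
print(re.split(r"[\r\n]+", text))        # ['first', 'second', 'third', 'fourth']
```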
41,989,455
2017-02-01T20:47:00.000
1
1
0
1
aerospike,python-3.6
41,991,031
3
false
0
0
I figured it out. I just needed to use pip3 instead of pip to install it to the correct version of Python (though I was only able to get it onto 3.5, not 3.6, for some reason).
1
1
0
I just followed the instructions on the site and installed aerospike (on linux mint). I'm able to import the aerospike python client module from python 2.7 but not from 3.6 (newly installed). I'm thinking that I need to add the directory to my "python path" perhaps??, but having difficulty understanding how this works. I want to be able to run aerospike and matplotlib in 3.6.
newbie installing aerospike client to both my python versions
0.066568
0
0
154
41,991,575
2017-02-01T23:13:00.000
0
0
0
1
python,vagrant,pycharm
41,991,612
1
true
0
0
Turns out I was only changing the project python interpreter configuration to point to my running Vagrant machine, however, the Run/Debug Configuration wasn't set to use this project interpreter, but rather a different Vagrant machine which was currently down. Fixed by editing the Run/Debug Configuration and changing the "Python interpreter" to "Project Default".
1
0
0
I get the following PyCharm error when running my python3 interpreter on Vagrant: Error running x.py Can't run remote python interpreter: The provider for this Vagrant-managed machine is reporting that it is not yet ready for SSH. Depending on your provider this can carry different meanings. Make sure your machine is created and running and try again. Additionally, check the output of vagrant status to verify that the machine is in the state that you expect. If you continue to get this error message, please view the documentation for the provider you're using. I have no problem running the code from the terminal. I only have a problem when running through my Run/Debug Configuration. Using PyCharm 2016.3.1 on Windows 10. How do I run this from my Run/Debug Configuration in PyCharm?
Running python3 using a Run/Debug config with Vagrant: "Error running x.py Can't run remote python interpreter"
1.2
0
0
691
41,992,016
2017-02-01T23:51:00.000
1
0
0
0
python,facebook-live-api
42,508,346
1
true
0
0
No, there is no such API at this time.
1
0
0
Is there a Facebook API call to retrieve all or a subset of the current live videos with the relative metadata, such as location, user who is streaming, time at stream? The implementation could be in Python.
How to access videos in Facebook livemap programmatically?
1.2
0
1
355
41,992,104
2017-02-02T00:00:00.000
0
0
1
1
python,linux
41,992,148
2
false
0
0
1) You should not modify the system's binaries yourself directly 2) If your $PATH variable doesn't contain /usr/local/bin, the naming of that secondary directory isn't really important. You can install / upgrade independently wherever you have installed your extra binaries. 3) For Python specifically, you could also just use conda / virtualenv invoked by your system's python to manage your versions & projects.
1
7
0
On Linux, specifically Debian Jessie, should I use /usr/bin/python or should I install another copy in /usr/local/bin? I understand that the former is the system version and that it can change when the operating system is updated. This would mean that I can update the version in the latter independently of the OS. As I am already using python 3, I don't see what significant practical difference that would make. Are there other reasons to use a local version? (I know there are ~42 SO questions about how to change between version, but I can't find any about why)
/usr/bin/python vs /usr/local/bin/python
0
0
0
6,843
41,993,499
2017-02-02T02:59:00.000
0
0
1
0
python,c++,sonarqube,sonarqube-scan
42,019,717
1
true
0
0
Yes, SonarC++ build-wrapper is generic and you can use it with distutils. Follow the documentation on sonarqube.com which provides examples on how to setup the analysis of C/C++ projects.
1
0
0
I have a python project with a significant amount of C/C++ code, and I use distutils to build the project. When using sonarqube, I received the following message: By using the build-wrapper-output.bypass=true property, you'll switch to an "at best" mode that could result in false-positives and false-negatives. (Note: message modified from the original for clarity) Is it possible to use the sonarqube build wrapper when compiling with distutils?
Sonarqube build wrapper for distutils
1.2
0
0
240
41,997,011
2017-02-02T08:04:00.000
1
0
1
0
python,anaconda
44,805,251
1
false
0
0
I had the exact same problem, and solved it using various answers from various places. Here is what I did: Open a terminal, then copy-paste: conda update nb_conda nb_conda_kernels nb_anacondacloud For me it answered that nb_conda wasn't installed, so I added: conda install nb_conda From there, jupyter notebook launched properly but it ended up in no folder and stated a "server error". I solved the problem by running it and stating the folder where I wanted it to start: jupyter notebook C:\Users\YOURNAME\Documents\Python It is then possible to update your shortcut directly by changing the "target" and "opens in": Target : C:\ProgramData\Anaconda3\Scripts\jupyter.exe notebook Opens in : C:\Users\YOURNAME\Documents\Python I hope that it will work all right for you; let me know if you find a better solution...
1
0
0
I just downloaded and installed Anaconda, and I opened the Jupyter notebook from the "start" menu. It prompts a black window that looks like a command line window, but instead of opening my browser on the notebook "tree" page, it just closes the black command line window and nothing happens. I formatted my computer and downloaded Anaconda; before, it was fine, and now it doesn't launch. I tried to uninstall and install again but nothing... any ideas?
Python Anaconda Jupyter notebook doesn't launch
0.197375
0
0
663
41,999,094
2017-02-02T09:58:00.000
2
0
1
0
python,pymongo
41,999,635
2
false
0
0
Use dateutil: dateutil.parser.parse("2017-10-13T10:53:53.000Z") will return datetime.datetime(2017, 10, 13, 10, 53, 53, tzinfo=tzutc())
1
25
0
How to insert datetime string like this "2017-10-13T10:53:53.000Z" into mongo db as ISODate? I get a string in mongodb when I insert: datetime.strptime("2017-10-13T10:53:53.000Z", "%Y-%m-%dT%H:%M:%S.000Z")
How to insert datetime string into Mongodb as ISODate using pymongo
0.197375
1
0
43,049
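A minimal sketch of the dateutil approach from the answer above; the MongoDB connection string and collection names are placeholders, and pymongo plus python-dateutil are assumed to be installed.

```python
from dateutil import parser
from pymongo import MongoClient

# Parse the ISO-8601 string into a timezone-aware datetime object.
ts = parser.parse("2017-10-13T10:53:53.000Z")
print(ts)  # 2017-10-13 10:53:53+00:00

# Placeholder connection/collection names for illustration only.
client = MongoClient("mongodb://localhost:27017")
collection = client["testdb"]["events"]

# pymongo stores datetime.datetime values as BSON dates, which the mongo
# shell displays as ISODate(...), unlike a plain string.
collection.insert_one({"created_at": ts})
```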
42,000,361
2017-02-02T10:56:00.000
0
1
0
0
python,firefox,selenium-webdriver
42,038,226
1
false
0
0
This depends to some degree on the browser you're using. If you are using a current version of Chrome or Firefox, deleting all cookies before starting testing won't make sense, as every driver instance will start with a separate, temporary profile without any cookies anyway. Deleting all cookies after running the tests is equally unnecessary in those browsers, as the next test will start up with a clean profile again anyway. The only real scenario where deleting cookies makes sense is if you do multiple things in the same test (i.e., with the same driver instance) where you know that at some point the app sets a cookie that you don't want to be there at a later stage in the test. That's more of an edge case, though.
1
0
0
I need to use delete_all_cookies in my code. I have some concerns: do I need to place it before opening the URL or before quit()? Can anyone clarify? A lot of cache has been created during my test runs. My main objective is to clear the cache on the test server machine.
At which point should I place delete_all_cookies
0
0
0
33
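A small sketch of the edge case described in the answer above (clearing cookies mid-test with the same driver instance); the URLs are placeholders and a working chromedriver setup is assumed.

```python
from selenium import webdriver

driver = webdriver.Chrome()  # assumes chromedriver is on PATH

# First part of the test: the app may set cookies here.
driver.get("https://example.com/step-one")   # placeholder URL

# Clear cookies only at the point where you know a previously set cookie
# must not influence the next step; a fresh driver already starts clean.
driver.delete_all_cookies()

driver.get("https://example.com/step-two")   # placeholder URL
driver.quit()
```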
42,001,300
2017-02-02T11:40:00.000
2
0
1
0
python,bokeh
42,007,553
1
true
0
0
You have more than one Python or Python environment installed on your system, and the Python that you are actually running is different from the one that you have installed bokeh into. As a concrete example, here is one possible scenario, similar to ones I have seen on occasion: Jupyter notebook is installed in the OS/system Python but not in the conda root env; bokeh is installed in a conda env but Jupyter notebook is not; the user runs the notebook, and this uses the system Python, which has a different bokeh version (or none at all).
1
2
0
I am trying to run some bokeh examples, and when I import bokeh.layouts (either from the ipython interpreter, or in a jupyter notebook) I get the following error: ImportError: No module named 'bokeh.layouts'. I am using python 3.5 and bokeh 0.12.4 installed via conda install bokeh. What's wrong with this?
ImportError: No module named 'bokeh.layouts'
1.2
0
0
6,112
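A quick diagnostic sketch along the lines of the answer above: run it from the same notebook or interpreter that raises the ImportError to see which Python, and which bokeh installation (if any), it is actually using.

```python
import sys

# Which interpreter is the notebook actually running?
print(sys.executable)
print(sys.version)

try:
    import bokeh
    # If this import works, check that the version and install location match
    # the environment you installed bokeh into with conda.
    print(bokeh.__version__, bokeh.__file__)
except ImportError:
    print("bokeh is not installed for this interpreter")
```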
42,003,456
2017-02-02T13:27:00.000
0
0
0
0
python,python-2.7,session,request,phantomjs
42,003,480
1
false
1
0
You don't need to keep sending the session, as long as you keep the Python application running you should be good.
1
1
0
So I am currently writing a script that will allow me to wait on a website that has a queue page before I can access the contents. Essentially the queue page is where they let people in randomly. In order to increase my chance of getting in faster, I am writing a multi-threaded script and having each thread wait in line. First thing that came to my mind is: would session.get() work in this case? If I send a session get request every 10 seconds, would I hold my position in the queue? Or would I end up at the end? Some info about the website: they randomly let people in. I am not sure if refreshing the page resets your chance or not, but the best thing would be to leave the page open and let it do its thing. I could use phantomjs, but I would rather not have over 100 headless browsers open slowing down my program and computer.
Does Python requests session keep page active?
0
0
1
194
42,003,461
2017-02-02T13:27:00.000
-2
0
0
0
python,e-commerce,bigcommerce
48,835,159
1
false
1
0
This will create the product on the BigCommerce website. You create the image after creating the product, by entering the following line. The image_file tag should be a fully qualified URL pointing to an image that is accessible to the BigCommerce website, being found either on another website or on your own webserver. api.ProductImages.create(parentid=custom.id, image_file='http://www.evenmore.co.uk/images/emgrab_s.jpg', description='My image description')
1
3
0
how do I upload an image (from the web) using Bigcommerce's Python API? I've got this so far: custom = api.Products.create(name='Test', type='physical', price=8.33, categories=[85], availability='available', weight=0) Thank you! I've tried almost everything!
Bigcommerce Python API, how do I create a product with an image?
-0.379949
0
1
438
42,006,246
2017-02-02T15:34:00.000
1
0
0
0
python,django,django-allauth
42,011,936
1
false
1
0
@pennersr was kind enough to answer this on the allauth github page: This truly all depends on how you model things, there is nothing in allauth that blocks you from implementing the above. One way of looking at things is that the signup form is not different at all. It merely contains an additional switch that indicates the type of user account that is to be created. Then, it is merely a matter of visualizing things properly, if you select type=employer, then show a different set of fields compared to signing up using type=developer. If you don't want such a switch in your form, then you can store the type of account being created somewhere in the session, and refer to that when populating the account.
1
0
0
Having read many stack overflow questions, tutorials etc on all-auth I keep getting the impression that it only supports the registration of one type of user per project. I have two usecases A business user authenticates and registers his business in one step. A developer user authenticates and just fills in the name of his employer (software company). I do not want the developer to see the business fields when he signs up. i.e his signup form is different. If, in fact signup should be common and the user specific details should be left to a redirect, how to accomplish this from social auth depending on user type?
Django All-Auth Role Based Signup
0.197375
0
0
376
42,006,353
2017-02-02T15:40:00.000
1
0
1
0
python,regex,python-2.7
42,006,491
1
true
0
0
^ is indicating the beginning of the string. So it is only searching for the first character being an open bracket. You can remove the ^
1
0
0
This may have been asked already but I can't seem to find any duplicates so I am going to ask myself and hope I don't get flagged. I am building a small parser using a state machine that switches into a sub-parser whenever a { is detected. I only want the very first occurrence of what my regex will match at the position I am currently at. Basically my regex is keymatch = re.compile(r'^\{([A-Za-z0-9])\}') and I am using the string this {is} a {test} string. The problem is I am getting no match when I run keymatch.match(myString, pos) I have also tried with keymatch.search(myString, pos) and found the same result. For the record, pos is pointing at the correct location, hard coded values also return none, while no position or leading ^ in my regex returns all the matches, which I do not want because of how I am rebuilding the string character by character. In addition, if a match is not found at pos, I want my match object to be none to trigger an error rather than just give me anything down the string it finds. Is anything noticeably wrong with my approach, and if so, what can I do to fix it?
python 2.7 regex search at pos where '^' starts at pos
1.2
0
0
43
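A minimal sketch of the fix suggested in the answer above (drop the ^ and rely on match anchoring at pos). The character class is widened to [A-Za-z0-9]+ here so the made-up sample string actually matches, which is an assumption beyond the original single-character pattern.

```python
import re

# No leading ^: pattern.match() already anchors at the given pos argument,
# whereas ^ only matches at the true start of the string (or after a newline
# with re.MULTILINE), which is why the original pattern never matched.
keymatch = re.compile(r"\{([A-Za-z0-9]+)\}")   # '+' added so {is}/{test} can match

my_string = "this {is} a {test} string"

m = keymatch.match(my_string, 5)   # pos 5 is the '{' of '{is}'
print(m.group(1) if m else None)   # 'is'

m = keymatch.match(my_string, 0)   # pos 0 is 't', so no match -> None
print(m)                           # None
```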
42,006,500
2017-02-02T15:47:00.000
2
0
0
0
python,python-sphinx
48,073,039
1
false
1
0
In order to install a specific version of Sphinx, type in the terminal: pip install sphinx==1.4.9
1
0
0
The search functionality in my Sphinx doc has stopped working after upgrading to version 1.5.1. I'm using the sphinx_rtd_theme. How do I downgrade Sphinx to 1.4.9?
Downgrade Sphinx from version 1.5.2. to version 1.4.9
0.379949
0
0
1,461
42,006,758
2017-02-02T15:57:00.000
4
0
0
0
python,web-scraping
42,006,894
1
false
1
0
Apparently your IP was banned by the website for suspicious activity. There are a couple of ways around that: talk to the website owners (this is the most straightforward and nicest way), or change your IP, e.g. by connecting through a pool of public proxies or Tor (this is a little bit dirty and not so robust, e.g. you can still be banned by user-agent or some other properties of your scraper).
1
2
0
I made a simple scraper that accesses an album, and scrapes lyrics for each song from azlyrics.com. After about an hour of working, the website crashed, with an error: Chrome: www.azlyrics.com didn’t send any data. ERR_EMPTY_RESPONSE Tor, firefox, waterfox: The connection was reset The connection to the server was reset while the page was loading. It's the same for all devices on my home network. If I use mobile data to access it via my phone it works fine. I tried fixing it with ipconfig /release /renew, but it didn't work. I'm at a loss for what else I could do or why it even happened. Any help is greatly appreciated.
Website error after scraping
0.664037
0
1
337
42,006,791
2017-02-02T15:58:00.000
1
0
0
0
python,django,debugging
42,007,400
3
false
1
0
"Internal" is a relative term. People or machines on the internal network can still be considered attackers.
3
0
0
I'm working on a Django project that I took over for someone else that is only used internally. It's not deployed to a website and can only be accessed on a local network. The previous developer had left DEBUG = True in settings.py. Django docs really emphasize that leaving DEBUG=True when the site is in production is bad. The site is inaccessible by anyone not on the local network, and is only even looked at by ~5 people regularly. Aside from security reasons, is there any other downside to operating permanently in DEBUG mode?
Leave Debug on in Django for internal application
0.066568
0
0
162
42,006,791
2017-02-02T15:58:00.000
3
0
0
0
python,django,debugging
42,006,881
3
false
1
0
Debug mode might leak a bit of memory. Additionally, it is much better for production systems, however small, to email their administrator with the full error message and stack trace (which Django does by default when DEBUG=False) than to show it on the browser. This way the administrator knows exactly what happened instead of trying to reproduce it with vague information from the users ("I clicked here and then I think I clicked there and then there was this message"). You need to set the ADMINS and EMAIL_* settings correctly though.
3
0
0
I'm working on a Django project that I took over for someone else that is only used internally. It's not deployed to a website and can only be accessed on a local network. The previous developer had left DEBUG = True in settings.py. Django docs really emphasize that leaving DEBUG=True when the site is in production is bad. The site is inaccessible by anyone not on the local network, and is only even looked at by ~5 people regularly. Aside from security reasons, is there any other downside to operating permanently in DEBUG mode?
Leave Debug on in Django for internal application
0.197375
0
0
162
42,006,791
2017-02-02T15:58:00.000
1
0
0
0
python,django,debugging
42,006,896
3
false
1
0
The Django docs warn to never deploy with debug on: Never deploy a site into production with DEBUG turned on. Did you catch that? NEVER deploy a site into production with DEBUG turned on. Later, they give a reason that isn't related to security: It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server.
3
0
0
I'm working on a Django project that I took over for someone else that is only used internally. It's not deployed to a website and can only be accessed on a local network. The previous developer had left DEBUG = True in settings.py. Django docs really emphasize that leaving DEBUG=True when the site is in production is bad. The site is inaccessible by anyone not on the local network, and is only even looked at by ~5 people regularly. Aside from security reasons, is there any other downside to operating permanently in DEBUG mode?
Leave Debug on in Django for internal application
0.066568
0
0
162
42,007,071
2017-02-02T16:12:00.000
2
0
1
0
python,pip,global,packages,local
42,007,776
1
false
0
0
The stuff under /usr/lib is system packages considered part of the OS. It's likely/possible that OS scripts and services will have dependencies on these components. I'd recommend not touching these yourself, or really using or depending on them for user scripts either, as this will make your app OS or even OS-version dependent. Use these if writing scripts that run at system level such as doing maintenance or admin tasks, although I'd seriously consider even these using... Stuff under /usr/local/lib is installed locally for use by any user. System scripts and such won't depend on these (I don't know SuSE myself though), but other users' scripts might well do, so that needs to be borne in mind when making changes here. It's a shared resource. If you're writing scripts that other users might need to run, develop against this to ensure they will have access to all required dependencies. Stuff in your home directory is all yours, so do as thou wilt. Use this if you're writing something just for yourself and especially if you might need the scripts to be portable to other boxes/OSes. There might well be other options that make sense, such as if you're part of a team developing application software, in which case install your team's base dev packages in a shared location but perhaps not /usr/local. In terms of using zypper or pip, I'd suggest using zypper to update /usr/lib for sure, as it's the specific tool for OS configuration updates. The same probably goes for /usr/local/lib too, as that's really part of the 'system', but it's really up to you and which method might make most sense, e.g. if you needed to replicate the config on another host. For stuff in your homedir it's up to you, but if you decide to move to a new host on a new OS, pip will still be available and so that environment will be easier to recreate.
1
1
0
I'm confused about the possibilities for installing external python packages: install a package locally with pip into /home/chris/.local/lib/python3.4/site-packages $ pip install --user packagename install a package globally with pip into /usr/local/lib/python3.4/site-packages (superuser permission required) $ pip install packagename install a package globally with zypper into /usr/lib/python3.4/site-packages (superuser permission required) $ zypper install packagename I use OpenSuse with the package manager zypper and have access to user root. What I (think I) know about pip: - pip just downloads the latest version. - Installed packages aren't checked for newer versions. - Own packages can be installed in a virtual env. - It takes more time to download and install than zypper. - Local or global installation is possible. The package manager of my system: - Downloads and installs faster. - Installs the package only globally. My question is: when and why should I do the installation with pip (local, global) or with zypper? I've read a lot about this issue but could not answer this question clearly...
When install external python packages global, when local? pip or system package-manager?
0.379949
0
0
795
42,007,272
2017-02-02T16:23:00.000
0
0
0
1
python,windows,python-3.x,winapi,pywin32
50,964,423
4
false
0
0
If you have the rotate shortcut active in windows (CTRL+ALT+ARROW KEY) you can use pyautogui.hotkey function.
1
1
0
I am trying to write a python script to rotate the screen in Windows. I have clues about doing it with Win32api. What are the other possibilities or commands to achieve this (Win32api included)?
Screen rotation in windows with python
0
0
0
4,207
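A tiny sketch of the pyautogui.hotkey approach from the answer above; it assumes pyautogui is installed and that the graphics driver's rotation hotkeys (Ctrl+Alt+Arrow) are enabled on the machine, which is not the case on every Windows setup.

```python
import pyautogui

# Send Ctrl+Alt+Right to rotate the screen (only works if the graphics
# driver's rotation hotkeys are enabled on this machine).
pyautogui.hotkey("ctrl", "alt", "right")

# Ctrl+Alt+Up usually restores the normal landscape orientation.
pyautogui.hotkey("ctrl", "alt", "up")
```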
42,007,591
2017-02-02T16:38:00.000
1
0
0
0
python,keras,object-detection,training-data
42,181,934
1
true
0
0
Your task is a so-called binary classification. Make sure that your final layer has only one neuron (e.g. for a Sequential model: model.add(Dense(1, ... other parameters ... ))) and use binary_crossentropy as the loss function. Hope this helps.
1
1
1
I want to make a system that recognizes a single object using keras. In my case I will be detecting car wheels. How do I train my system for just 1 object? I did a classification task before using cats and dogs, but now it's a completely different task. Do I still "classify", with class 0 = wheels, class 1 = non-wheels (just random images of anything)? How do I do the following steps in this problem? 1) Train the system for 1 object 2) Detect the object (sliding window or heatmap)
Single object detection keras
1.2
0
0
907
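A minimal sketch of the binary-classification setup described in the answer above (a single output neuron plus binary_crossentropy); the input shape and layer sizes are arbitrary placeholders, and a sigmoid activation is assumed on the final layer so the output can be read as a wheel/non-wheel probability.

```python
from keras.models import Sequential
from keras.layers import Dense, Flatten

# Placeholder input shape (e.g. 64x64 RGB images); adapt to your data.
model = Sequential([
    Flatten(input_shape=(64, 64, 3)),
    Dense(128, activation="relu"),
    Dense(1, activation="sigmoid"),   # one neuron: P(image contains a wheel)
])

# binary_crossentropy is the loss the answer recommends for a 2-class problem.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```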
42,008,704
2017-02-02T17:33:00.000
0
1
0
0
intellij-idea,pycharm,python-unittest
42,026,672
1
false
0
0
This turned out to be an issue I caused myself. we had added a folder called 'UnitTest' and introducing this to the path caused issues with PyCharm knowing what was a true UnitTest file. I still don't know exactly what caused some files to work, but there appears to be one method in those files that did work that was probably being imported from another file that had the proper pathing.
1
0
0
I have upgraded my Community version of PyCharm to 2016.3.2, and I'm not positive it was this exact version, but when I went to run files that had unittests in them, only some of them are recognized as UnitTests that I can right click and run. I have looked to make sure that my classes implement unittest.TestCase class clWorkflowWebClientTest(unittest.TestCase): all of my tests begin with test_blahblah() If I go into Edit Configurations and add one manually, I can right click and run it from the project tree, and it runs as a UnitTest. But I don't get the "Run UnitTests in Blah' dialog when I right click the file.
Did something Change with Pycharm 2016.3.2 - UnitTests no longer auto discovered
0
0
0
37
42,008,720
2017-02-02T17:34:00.000
0
0
0
0
python,database,sqlite,csv
42,173,826
2
true
0
0
So after researching some off the shelf options I found that the Devart Excel Add Ins did exactly what I needed. They are paid add ins, however, they seem to support almost all modern databases including SQlite. Once the add in is installed you can connect to a database and manipulate the data returned just like normal in Excel including bulk edits and advanced filtering, all changes are highlighted and can easily be written to the database with one click. Overall I thought it was a pretty solid solution and everyone seems to be very happy with it as it made interacting with a database intuitive and non threatening to the more technically challenged.
1
0
0
I was wondering if there is a way to allow a user to export a SQLite database as a .csv file, make some changes to it in a program like Excel, then upload that .csv file back to the table it came from using a record UPDATE method. Currently I have a client that needed an inventory and pricing management system for their e-commerce store. I designed a database system and logic in Python 3 and SQLite. The system from a programming standpoint works flawlessly. The problem I have is that there are some less than technical office staff that need to edit things like product markup within the database. Currently, I have them set up with SQLite DB Browser; from there they can edit products one at a time and write the changes to the database. They can also export tables to a .csv file for data manipulation in Excel. The main issue is getting that .csv file back into the table it was exported from using an UPDATE method. When importing a .csv file to a table in SQLite DB Browser there is no way to perform an update import. It can only insert new rows by default, and due to my table constraints that is a problem. I like SQLite DB Browser because it is clean and simple and does exactly what I need. However, as soon as you have to edit more than one thing at a time and filter information in more complicated ways it starts to lack the functionality needed. Is there a solution out there for SQLite DB Browser to tackle this problem? Is there a better software option altogether to interact with a SQLite database that would give me that last bit of functionality?
User friendly SQLite database csv file import update solution
1.2
1
0
572
42,009,558
2017-02-02T18:20:00.000
0
1
0
1
python,ubuntu
42,010,021
1
false
0
0
It was the path of the .py file being called inside jumpbox.py. I was referencing it only as the filename, without the full path, since it was in the same directory. os.system("python <full path>.py") made it work perfectly. Thanks @Hannu
1
0
0
I have a python program I want to run when a specific user logs into my Ubuntu Server. Previously, I tried to do this via the command useradd -m -s /var/jumpbox/jumpbox.py jumpbox. This ran the program, but it didn't work the same way it did when I called it via ./jumpbox.py from the /var/jumpbox directory. The problem is, this is a curses menu, and when an option is selected, another .py file is called to run. Using the useradd method to run jumpbox.py, my menu was the part that worked, but it never called my other .py files when an option was selected. What is the best way to ensure my /var/jumpbox/jumpbox.py file is run when the jumpbox user (and only this user) logs into the server?
Ubuntu Server Run Script at Login for Specific User
0
0
0
65
42,013,132
2017-02-02T22:01:00.000
0
0
1
0
python,scipy,debian
42,013,274
3
false
0
0
You may want to try with pip3 install scipy
1
5
0
I used sudo apt-get install python-scipy to install scipy. This put all the files in /usr/lib/python2.7.dist-packages/scipy. My best guess is it chose that location because python 2.7 was the default version of python. I also want to use scipy with python 3 however. Does the package need to be rebuilt for python 3 or can I just point python 3 to the existing version? I've tried using pip to install two parallel version, but I can't get the dependency libblas3 installed for my system. What's the best way to do this? I'm on Debian Jessie.
Install scipy for both python 2 and python 3
0
0
0
27,416
42,013,705
2017-02-02T22:42:00.000
4
0
1
0
python,geany
44,169,039
3
false
0
0
Start by creating a project file that resides in your venv folder. Then, point to the Python interpreter that resides in the venv folder using the build configuration feature. These actions will allow you to run the correct Python interpreter for each virtual environment you create and not affect the configuration of other virtual environments, other project configurations, or your base configuration. To set Geany up so that it runs Python code in the appropriate venv, follow these steps: 1) Verify Geany is set up for Project Sessions. To do this, go to the main menu, select 'Edit', then 'Preferences'. The Preferences window will appear. Select the General tab, then select the 'Miscellaneous' tab. Now look at the 'Projects' section on the tab. Verify both 'Use project-based session files' and 'Store project file inside the project-based directory' are selected. 2) Create a Geany project file in your venv folder. To do this, go to the main menu, select 'Project', then select 'New'. Give the project a name and save it in your virtual environment folder. 3) Configure the build commands for the above project. To do this, go to the main menu, select 'Build', then select 'Set Build Commands'. A window will appear. Look for the 'Execute' button on the bottom left of the window. In the command box next to the 'Execute' button type in the complete path to the bin folder in your venv folder that contains the Python interpreter you wish to run, then add "%f" to the end of the command. For example, my virtual folder is in home/my_virtual_env_folder and I want to run the Python3.4 interpreter in that folder, so I would type in: /home/virtual_env_folder/bin/python3.4 "%f" Click 'OK' and the changes you made will be saved. Now when you open the project you just created, the project file will automatically point to the correct Python interpreter for the venv you are working in.
2
5
0
I am just starting to set up virtual environments for my Python projects. Up to now I have used and like Geany for development and testing. When I set up my new virtual environment, what will I need to set in Geany to make sure it runs my Python code in the appropriate venv? Thanks!
using geany with python virtual environment
0.26052
0
0
5,273
42,013,705
2017-02-02T22:42:00.000
1
0
1
0
python,geany
61,304,941
3
false
0
0
I am using Windows 10 and conda virtual environments, which I first have to activate before use. I was able to use these conda environments in Geany 1.36 by doing the following: Go to menu: Edit - Preferences, in there go to Tools tab and in Terminal, type the following: cmd.exe /Q/C conda activate envname && %c Replace "envname" with the name of your conda virtual environment. && will also pass the argument %c to the execution line. %c will pass the command in execute command from Geany (step 2). Go to menu: Build - Set Build Commands, in there go to "Execute commands" section, and in Execute Command, type the following: python "%f" %f will pass the name of the file that you are executing from. In the end it's like you are executing the following (assuming your python file is "script.py"): cmd.exe /Q/C conda activate envname && python script.py This worked for me. Just a note, when I installed miniconda, I added it to the PATH variables in Windows 10. That is why I don't have to add the path where the activate.bat or python.exe are located, they are already declared in the PATH variable from Windows.
2
5
0
I am just starting to set up virtual environments for my Python projects. Up to now I have used and like Geany for development and testing. When I set up my new virtual environment, what will I need to set in Geany to make sure it runs my Python code in the appropriate venv? Thanks!
using geany with python virtual environment
0.066568
0
0
5,273
42,015,758
2017-02-03T02:28:00.000
0
0
1
0
python,python-3.x,python-idle
42,015,874
3
false
0
0
You shouldn't have to worry about a limit. As for a new line, you can use an if with \n or a for loop depending on what you're going for.
2
0
0
I'm a bit curious as I want to create a file containing a dictionary that would automatically update whenever an event occurs and need to know if there is a limit to the number of characters Python's IDE can hold on a single line. An alternative for me would be at least knowing a way to make the dictionary always start from a new line when it reached a certain length. But I'd still like to know if there is a limit just for knowing's sake.
Any limit to how many characters Python's IDE can take in one line?
0
0
0
1,765
42,015,758
2017-02-03T02:28:00.000
1
0
1
0
python,python-3.x,python-idle
42,016,708
3
true
0
0
As far as disk storage and RAM are concerned, '\n' is just another character. As far as tcl/tk and Python's tkinter wrapper are concerned, newlines are very important. Since tk's Text widget is intended to display text to humans, and since vertical scrolling is much more useful for this than horizontal scrolling, it is optimized for the former. A hundred 1000 char lines (100,000 chars total) bogs down vertical scrolling, whereas 300,000 50 char lines (15,000,000 chars total) is no problem. IDLE uses tkinter and the main windows are based on tk's Text widget. So if you want to view text in IDLE, keep line lengths sensible. I do not know about other IDEs that use other GUI frameworks. But even if they handle indefinitely long lines better, horizontal scrolling of a 100000 char line is pretty obnoxious.
2
0
0
I'm a bit curious as I want to create a file containing a dictionary that would automatically update whenever an event occurs and need to know if there is a limit to the number of characters Python's IDE can hold on a single line. An alternative for me would be at least knowing a way to make the dictionary always start from a new line when it reached a certain length. But I'd still like to know if there is a limit just for knowing's sake.
Any limit to how many characters Python's IDE can take in one line?
1.2
0
0
1,765
42,017,180
2017-02-03T05:09:00.000
0
0
0
0
openerp,python-unicode,point-of-sale
44,111,743
1
false
1
0
Try this; maybe it could help you to solve the problem: try making changes in the default addons of your Odoo, because there is some configuration there related to font printing. There is one module named hw_escpos; in that module you can find one function where the font configuration is, so just add the font you want and try to print.
1
0
0
Any solutions for unsupported fonts to be able to print via POSBOX for odoo POS? POS BOX does not support Myanmar font. We need to print via POS BOX because we need multiple printings (to kitchen 1, to kitchen 2, to drink counter, etc ...). Any solutions for this issue, please?
Any solutions for unsupported fonts to be able to print via POSBOX for odoo POS?
0
0
0
542
42,018,636
2017-02-03T07:09:00.000
0
0
1
0
python,python-2.7,python-3.x,module,beautifulsoup
42,045,491
1
false
0
0
Look inside your pip3 script. Does the #!-line at the beginning actually contain python3.5? If not, change it.
1
0
0
I use both 2.7 and 3.5 and I want to keep both installed as we still use 2.7 in VFX and Gaming industry. The problem is when I attempt to install a module in 3.5 (for this example, I'll use "pip3 install beautifulsoup4"), the module installs in the 2.7 folder and not 3.5 folder. How do I get terminal to install the correct version of Python Module? For the record I would like the modules to install in this folder: /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages I've tried specifying the directory. I've heard running the "2to3 -w bs4" will convert the code to 3.5 for me but I am unfamiliar with this command so if anyone can correctly format it for me that would be greatly appreciated (I am still a newb so please be descriptive). Thank you!
Installing Python modules in version 3.5.2 with both Python 2.7 and 3.5 already installed on Mac
0
0
0
963
42,019,279
2017-02-03T07:51:00.000
0
0
0
0
python,python-2.7,resume-upload
68,074,868
2
false
0
0
Excellent answer by GuySoft - it helped me a lot. I have had to slightly modify it as I never (so far) encountered the three exceptions his script is catching, but I experienced a lot of ConnectionResetError and socket.timeout errors on FTP uploads, so I added that. I also noticed that if I added a timeout of 60 seconds at FTP login, the number of ConnectionResetErrors dropped significantly (but not altogether). It often happened that the upload got stuck at 100% at ftp.storbinary until socket.timeout, then tried 49 times and quit. I fixed that by comparing totalSize and rest_pos and exiting when equal. So I have a working solution now, but I will try to figure out what is causing the socket timeouts. Interesting thing is that when I used Filezilla and even a PHP script, the file uploads to the same FTP server were working without a glitch.
1
3
0
I am trying to upload a file in Python and I want to upload the file in resumable mode, i.e. when the internet connection resumes, the file upload resumes from the previous stage. Is there any specific protocol that supports resumable file upload? Thanks in advance.
Resumable file upload in python
0
0
1
2,044
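A rough sketch of a resumable FTP upload in the spirit of the answer above, using ftplib's rest argument to continue from where the server left off; the host, credentials and file names are placeholders, and the server must support the REST/SIZE commands.

```python
import ftplib
import os

def resumable_upload(host, user, password, local_path, remote_name):
    """Upload local_path, resuming from the size already present on the server."""
    with ftplib.FTP(host, user, password, timeout=60) as ftp:
        try:
            rest_pos = ftp.size(remote_name) or 0   # bytes already uploaded
        except ftplib.error_perm:
            rest_pos = 0                            # remote file does not exist yet

        total = os.path.getsize(local_path)
        if rest_pos >= total:
            return                                  # nothing left to send

        with open(local_path, "rb") as fh:
            fh.seek(rest_pos)
            # rest=rest_pos asks the server to start writing at that offset.
            ftp.storbinary(f"STOR {remote_name}", fh, rest=rest_pos)

# Placeholder values for illustration only.
# resumable_upload("ftp.example.com", "user", "secret", "backup.zip", "backup.zip")
```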
42,019,810
2017-02-03T08:27:00.000
1
0
0
0
python,sqlalchemy
42,028,145
1
false
1
0
If you're using a database session, you can simply specify the columns directly. session.query(User.email, User.name).filter(and_(User.id == id, User.status == 1)).first()
1
3
0
I'm using a query like this: user = User.query.options(load_only("email", "name")).filter(and_(User.id == id, User.status == 1)).first() I want to get only the email and name columns as a User object, but it returns all columns. I can't find any solutions. Can anybody help? Thanks
SQLAlchemy ORM Load Cols Only not working
0.197375
1
0
1,278
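A small sketch contrasting the two query styles discussed above, written against the older SQLAlchemy 1.x API that the question uses (string arguments to load_only; in SQLAlchemy 2.x load_only takes mapped attributes instead). The User model and data are made up.

```python
from sqlalchemy import Column, Integer, String, and_, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, load_only

Base = declarative_base()

class User(Base):                      # minimal stand-in model for illustration
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String)
    name = Column(String)
    status = Column(Integer)

engine = create_engine("sqlite://")    # in-memory database
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(id=1, email="a@example.com", name="Alice", status=1))
session.commit()

# Column-based query (the answer's suggestion): returns an (email, name) row
# tuple rather than a User instance, so only those columns are fetched.
print(session.query(User.email, User.name)
             .filter(and_(User.id == 1, User.status == 1))
             .first())

# load_only still returns a User object; email/name are loaded eagerly and the
# remaining columns are deferred (fetched later only if accessed).
user = (session.query(User).options(load_only("email", "name"))
               .filter(and_(User.id == 1, User.status == 1))
               .first())
print(user.email, user.name)
```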
42,023,939
2017-02-03T12:07:00.000
0
1
0
0
python,security,encryption,cryptography
42,026,136
2
false
1
0
Do the encryption and decryption on the second server (encryption server). Pass the password to the encryption server along with an id for encryption and it returns the encrypted password to store in the DB. When the password is needed pass the encrypted password to the encryption server for decryption. Have the encryption server monitor request activity, if an unusual number of requests are received sound an alarm and in extreme cases stop processing requests. Make the second server very secure. No Internet access, minimal access accounts, 2-factor authentication. The encryption server becomes a poor-man's HSM (Hardware Encryption Module).
1
0
0
I'm currently building a website where the users would enter their credentials for another web service that I'm going to scrape to get their data. I want to make sure that when I save their credentials in my database, I'm using the best encryption possible and the best architecture to ensure the highest level of security. The first idea that I had in mind was to encrypt the data using an RSA pub key (PBKDF2, PKCS1_OAEP, AES 256bit... ???) and then allowing my scrapping script to use the private key to decrypt the credentials and use them. But if my server is hacked, the hacker would have access to both the database and the private key, since it will be kept on my server that runs the scrapping script and hosts the DB. Is there an architecture pattern that solves this ? I've read that that there should be a mix of hashing and encryption to enable maximum security but hashing is uni directional and it doesn't fit my use case since I will have to reuse the credentials. If you can advise me with the best encryption cypher/pattern you know it could be awesome. I'm coding in python and I believe PyCrypto is the go-to library for encryption. (Sorry I have very little knowledge about cryptography so I might be confusing technologies)
Encrypting credentials and reusing them securely
0
0
0
207
42,026,072
2017-02-03T14:04:00.000
-1
0
0
0
python,apache-spark,pyspark
42,027,081
1
false
0
0
Include the package anyway to be sure, e.g. when launching via spark-submit or spark-shell: $SPARK_HOME/bin/spark-shell --packages graphframes:graphframes:0.1.0-spark1.6
1
1
1
I have been at it for some time and tried everything. I need to find out whether the package GraphFrames is included in the spark installation at my office cluster. I am using Spark version 1.5.0. Is there a way to list all the installed packages in Spark?
Finding out installed packages in Spark
-0.197375
0
0
1,593
42,029,159
2017-02-03T16:51:00.000
0
0
0
0
python,algorithm,graph
42,029,561
1
false
0
0
Instead of using floating points for weights, use tuples (weight, number_of_edges) with pairwise addition. The lowest weight path using these new weights will have the lowest weight, and in the case of a tie, be the shortest path. To define these weights I would make them a subclass of tuple with __add__ redefined. Then you should be able to use your existing code.
1
1
1
I am using a networkx weighted graph in order to model a transportation network. I am attempting to find the shortest path in terms of the sum of weighted edges. I have used Dijkstra path in order to find this path. My problem occurs when there is a tie in terms of weighted edges. When this occurs I would always like to choose from the set of paths that tied, the path that has the least number of edges. Dijkstra path does not seem to be doing this. Is there a way to ensure that I can choose the path with the least number of edges from a set of paths that are tied in terms of sum of weighted edges?
Python: Shortest Weighted Path and Least Number of Edges
0
0
1
711
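A sketch of the tuple-weight idea from the answer above, using networkx. The Weight class coerces plain numbers (such as the 0 networkx uses for the source distance) so it can be added to and compared with them; this leans on networkx accepting a callable weight function inside its Dijkstra implementation, and the tiny example graph is made up.

```python
import networkx as nx

class Weight(tuple):
    """(total_edge_weight, edge_count) compared lexicographically."""
    def __new__(cls, w=0.0, hops=0):
        return super().__new__(cls, (w, hops))
    @staticmethod
    def _coerce(other):
        return other if isinstance(other, Weight) else Weight(other, 0)
    def __add__(self, other):                      # pairwise addition
        o = self._coerce(other)
        return Weight(self[0] + o[0], self[1] + o[1])
    __radd__ = __add__                             # handles 0 + Weight(...)
    def __lt__(self, other):
        return tuple(self) < tuple(self._coerce(other))
    def __gt__(self, other):
        return tuple(self) > tuple(self._coerce(other))

# Made-up graph: a-c directly (weight 2) ties with a-b-c (1+1) on total weight,
# but the direct edge should win because it uses fewer edges.
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1.0), ("b", "c", 1.0), ("a", "c", 2.0)])

path = nx.dijkstra_path(
    G, "a", "c",
    weight=lambda u, v, d: Weight(d["weight"], 1),  # every edge counts as 1 hop
)
print(path)  # expected: ['a', 'c']
```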
42,030,293
2017-02-03T17:59:00.000
0
0
0
0
python,django,heroku,garbage-collection,celery
42,077,350
1
false
1
0
Looks like the problem is that I'm not using .iterator() to iterate over the main queryset. Even though I'm freeing the data structures I'm creating after each iteration, the actual query results are all cached. Unfortunately, I can't use .iterator(), because I use prefetch_related extensively. I need some kind of hybrid method. I think it will involve processing the top-level queryset in batches. It won't completely have the advantage of a finite number of queries that prefetch_related has, but it will be better than one query per model object.
1
2
0
I have a background task running under Celery on Heroku, that is getting "Error R14 (Memory quota exceeded)" frequently and "Error R15 (Memory quota vastly exceeded)" occasionally. I am loading a lot of stuff from the database (via Django on Postgres), but it should be loading up a big object, processing it, then disposing of the reference and loading up the next big object. My question is, does the garbage collector know to run before hitting Heroku's memory limit? Should I manually run the gc? Another thing is that my task sometimes fails, and then Celery automatically retries it, and it succeeds. It should be deterministic. I wonder if something is hanging around in memory after the task is done, and still takes up space when the next task starts. Restarting the worker process clears the memory and lets it succeed. Maybe Django or the DB has some caches that are not cleared? I'm using standard-2x size. I could go to performance-m or performance-l, but trying to avoid that as it would cost more money.
Does Python garbage collect when Heroku warns about memory quota vastly exceeded (R15)?
0
0
0
609
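A rough sketch of the hybrid batching idea described in the answer above: iterate the top-level queryset in primary-key chunks so each prefetch_related cache stays small and can be garbage-collected between batches. MyModel, process() and the related name are hypothetical placeholders, and the batch size is arbitrary.

```python
def iterate_in_batches(queryset, batch_size=500):
    """Yield objects in primary-key batches so prefetch caches stay bounded."""
    pks = list(queryset.values_list("pk", flat=True))
    for start in range(0, len(pks), batch_size):
        batch = (queryset.model.objects
                 .filter(pk__in=pks[start:start + batch_size])
                 .prefetch_related("related_items"))   # hypothetical related name
        for obj in batch:
            yield obj
        # The batch queryset (and its prefetch cache) goes out of scope here,
        # so memory is bounded by batch_size rather than the full result set.

# Usage sketch (MyModel and process are placeholders):
# for obj in iterate_in_batches(MyModel.objects.all()):
#     process(obj)
```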
42,033,360
2017-02-03T21:35:00.000
1
0
1
1
python,linux,chmod
42,035,060
2
true
0
0
The short answer is no. When you run chmod +x mypackage you are doing nothing, because mypackage is a directory and directories already have the execute flag (or you would be unable to list their files). If you type ls -l you will see. Your options for running the whole package directly without installing it are the ones you already mention: python -m mypackage, or a shell script which will do that for you. I see that your intention is to execute just ./something and have your application start working without specifying python in front, and also without it being globally installed. The easiest way will be to put a shell script in place that launches your package.
1
5
0
I want to make a python package executable from the command line. I know you can do chmod +x myfile.py where myfile.py starts with #!/usr/bin/env to make a single file executable using ./myfile.py. I also know you can do python -m mypackage to run a package including a __main__.py. However, if I add the shebang line to the __main__.py of a package, run chmod +x mypackage, and try ./mypackage, I get the error -bash: ./mypackage: Is a directory. Is it possible to run a package like this? (To be clear, I'm not looking for something like py2exe to make it a standalone executable. I'm still expecting it to be interpreted, I just want to make the launch simpler)
How to make a package executable from the command line?
1.2
0
0
4,192
42,033,430
2017-02-03T21:40:00.000
1
0
0
0
python,python-3.x,upload,nas,synology
42,033,609
1
true
0
0
The point of network drives is that they are used like local drives. So make it accessible to your operating system (mount on Unix/Linux/macOS, share on Windows...) and copy the file to it. Alternatively, you can use a network protocol such as WebDAV, SFTP, or whatever is enabled. Python supports them all (sometimes with some support from the OS).
1
2
0
I want to make a file and upload that on a Synology NAS. I am using Python. It doesn't support FTP but it is just a network drive. I know the question is really short, I just don't know what more to tell.
How to upload file on Synology NAS?
1.2
0
1
1,321
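A minimal sketch of the "mount it and copy" approach from the answer above; the mount point and file names are placeholders for wherever the Synology share is mapped on your machine.

```python
import shutil
from pathlib import Path

# Placeholder paths: adjust to your local file and to where the NAS share is
# mounted (e.g. /mnt/nas on Linux or a mapped drive letter on Windows).
source = Path("report.csv")
nas_share = Path("/mnt/nas/backups")

nas_share.mkdir(parents=True, exist_ok=True)   # make sure the target folder exists
shutil.copy2(source, nas_share / source.name)  # copy file + metadata to the share
```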
42,035,082
2017-02-04T00:21:00.000
1
1
0
0
python,amazon-web-services,aws-lambda,amazon-kinesis-firehose
42,035,255
1
true
1
0
Preload the data into a Redis server. This is exactly what Redis is good at.
1
2
0
I have a python-based lambda function which triggers on s3 put operations based on a kinesis firehose stream which sends data at the rate of around 10k records per minute. Right now the lambda function just performs some small fixups of the data and delivers it to a logstash instance in batches of 100. The lambda execution time is 5-12 secs which is fine as it runs every minute. We're looking at enriching the streamed data with some more info before sending it to logstash. Each message coming in has an "id" field, and we'd like to look up that id against a db of some sort, grab some extra info from the db and inject that into the object before passing it on. Problem is, I cannot make it go fast enough. I tried loading all the data (600k records) into DynamoDB, and performing lookups on each record in the loop in the lambda function. This slows down the execution way too much. Then I figured we don't have to look up the same id twice, so I'm using a list obj to hold already "looked-up" data - this brought the execution time down somewhat, but still not nearly close to what we'd like. Then I thought about preloading the entire DB dataset. I tested this - simply dumping all 600 records from dynamodb into a "cache list" object before starting to loop thru each record from the s3 object. The data dumps in about one minute, but the cache list is now so large that each lookup against it takes 5 secs (way slower than hitting the db). I'm at a loss on what to do here - I totally realize that lambda might not be the right platform for this and we'll probably move to some other product if we can't make it work, but first I thought I'd see if the community had some pointers as to how to speed up this thing.
Fast data access for AWS Lambda function
1.2
0
0
623
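A rough sketch of the Redis lookup pattern suggested in the answer above; the Redis host, the "item:<id>" key naming scheme and the record fields are all assumptions, and the redis package is assumed to be available in the Lambda deployment package.

```python
import json
import redis

# Connect once per container, outside the handler, so warm Lambda invocations
# reuse the connection (hostname is a placeholder).
r = redis.Redis(host="my-redis.example.internal", port=6379, decode_responses=True)

def enrich(records):
    """Attach extra info from Redis to each record, batching the lookups."""
    ids = [rec["id"] for rec in records]
    # One round trip for the whole batch instead of one GET per record.
    extras = r.mget([f"item:{i}" for i in ids])   # keys like "item:<id>" (assumed)
    for rec, raw in zip(records, extras):
        rec["extra"] = json.loads(raw) if raw else None
    return records
```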
42,036,307
2017-02-04T03:58:00.000
1
1
0
0
python,node.js,web-applications,server
42,036,454
1
true
1
0
You don't need to create your own encrypted communication protocol. Just serve all traffic over https. If you also wish to encrypt the data before storing it on a database you can encrypt it on arrival to the server. Check out Express.js for the server, Passport.js for authentication and search for 256-bit encryption on npm. There are quite a few implementations.
1
1
0
I'm getting started out creating a website where users can store and get (on user request) private information they store on the server. Since the information is private, I would also like to provide 256 bit encryption. So, how should I go about it? Should I code the back end server stuff in node.js or Python, since I'm comfortable with both languages? How do I go about providing a secure server to the user? And if in the future, I would like to expand my service to mobile apps for Android and iOS, what would be the process? Please try explaining in detail since that would be a great help :)
Build a Server to Receive and Send User's Private Information
1.2
0
1
30
42,036,376
2017-02-04T04:10:00.000
1
0
0
0
python-2.7,flask,google-analytics
42,036,453
1
true
0
0
If you add your tracking code only on the one web page you wish to track, then you should be able to accomplish your goal. Just to clarify, if you have two web pages, trackme.html and donottrackme.html, you would place the Google Analytics tracking code only on trackme.html. IP, device information, user agent, etc. should be visible within your dashboard.
1
0
0
I have a requirement where I need to store users' IP, device information, user_agent, etc. for one URL on my site. How do I go about this? This data will be used later as stats (which devices hit it more, which locations, etc.). I can see that Google Analytics helps in tracking an entire site. How do I enable it to track only one specific URL on my site and track all the information mentioned above?
Track users using Google analytics for one url on website
1.2
0
1
227
42,037,554
2017-02-04T07:03:00.000
1
0
0
0
python,selenium
42,040,656
1
false
0
0
Your problem is most likely the compatibility between Firefox and your GeckoDriver. Try using the latest Firefox and geckodriver. If you have a problem with Firefox, try reinstalling it and disable automatic updating.
1
0
0
I am new to Windows and this is the first time I am running a Python program on Windows. I am running a crawler program that uses selenium and the Firefox webdriver. My program runs successfully on mac/ubuntu, but on Windows webdriver.Firefox() opens a new geckodriver window (a cmd-like window) and just hangs there; nothing happens after that. The program doesn't move forward. Windows 7, geckodriver v0.13.
Windows: Selenium webdriver.Firefox hangs
0.197375
0
1
944
42,039,231
2017-02-04T10:24:00.000
0
0
1
1
python,powershell,exit
58,658,397
8
false
0
0
In my case, I found out that right ctrl + c does the trick in anaconda3 powershell - so no remapping necessary - I'm on Windows 10.
2
14
0
Python fails to quit when using Ctrl-C in Powershell/Command Prompt, and instead gives out a "KeyboardInterrupt" string. Recently I've reinstalled Windows 10. Before the reinstall Ctrl-C quit python (3.5/2.7) fine, with no output. Does anyone know why this has started happening? Whether it's just a simple setting? The only difference I can think of is I'm now on python 3.6. Ctrl-D works in Bash on Ubuntu on Windows, and Ctrl-C works fine in an activated anaconda python2 environment for quitting python.
Ctrl-C for quitting Python in Powershell now not working
0
0
0
30,551
42,039,231
2017-02-04T10:24:00.000
0
0
1
1
python,powershell,exit
54,466,340
8
false
0
0
Hitting the Esc key in the upper corner of the keyboard seems to work for me on Windows 7, inside Spyder with numpy running, for Python 3+. It broke the infinite ...: prompt on an erroneous syntax in the interactive script.
2
14
0
Python fails to quit when using Ctrl-C in Powershell/Command Prompt, and instead gives out a "KeyboardInterrupt" string. Recently I've reinstalled Windows 10. Before the reinstall Ctrl-C quit python (3.5/2.7) fine, with no output. Does anyone know why this has started happening? Whether it's just a simple setting? The only difference I can think of is I'm now on python 3.6. Ctrl-D works in Bash on Ubuntu on Windows, and Ctrl-C works fine in an activated anaconda python2 environment for quitting python.
Ctrl-C for quitting Python in Powershell now not working
0
0
0
30,551
42,039,868
2017-02-04T11:33:00.000
9
0
1
0
node.js,ipython,read-eval-print-loop,ijavascript
57,401,854
4
false
1
0
I've been looking for "ipython for node" for years and here's how I would answer your question: No.
1
31
0
Is there any kind of "repl + extra features" (like showing docs, module autoreload etc.), like iPython, but for Nodejs? And I mean something that runs locally & offline. This is a must. And preferably something that works both in terminal mode and has an optional nicer GUI on top (like iPython + iPythonQT/Jupyter-qtconsole). The standard Nodejs repl is usable, but it has horrible usability (clicking the up-arrow cycles through the repl history by line instead of by multi-line command, as you would expect any sane repl to work for interactively experimenting with things like class statements), and is very bare-bones. Every time I switch from iPython to it it's painful. A browser's repl like Chrome's that you can run for node too by starting a node-inspector debug session is more usable... but also too cumbersome.
Is there a REPL like iPython for Nodejs?
1
0
1
7,289
42,040,813
2017-02-04T13:15:00.000
-7
0
1
0
python,arrays,python-3.x,sorting
42,040,862
2
false
0
0
A list is a data structure that has characteristics which make it easy to do some things. An array is a very well understood standard data structure and isn't optimized for sorting. An array is basically a standard way of storing the product of sets of data. There hasn't ever been a notion of sorting it.
1
1
1
Why doesn’t the array class have a .sort()? I don't know how to sort an array directly. The class array.array is a packed list which looks like a C array. I want to use it because only numbers are needed in my case, but I need to be able to sort it. Is there some way to do that efficiently?
Why doesn’t 'array' have an in-place sort like list does?
-1
0
0
144
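A short sketch of ways to get a sorted array.array, since the type has no in-place .sort(); the sample values are made up.

```python
from array import array

a = array("i", [5, 1, 4, 2, 3])          # made-up sample data

# Option 1: rebuild the array from the sorted values.
a = array("i", sorted(a))
print(a)                                  # array('i', [1, 2, 3, 4, 5])

# Option 2: sort via slice assignment, keeping the same array object.
b = array("d", [2.5, 0.5, 1.5])
b[:] = array("d", sorted(b))
print(b)                                  # array('d', [0.5, 1.5, 2.5])
```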
42,041,151
2017-02-04T13:50:00.000
1
0
0
0
python,numpy,fft,ifft
42,046,236
1
true
0
0
If you have amplitude and phase vectors for a spectrum, you can convert them to a complex (IQ or Re,Im) vector by multiplying the cosine and sine of each phase value by its associated amplitude value (for each FFT bin with a non-zero amplitude, or vector-wise).
1
0
1
How do I compute irfft if I only have the amplitude and phase spectrum of a signal? In the numpy docs I've found only irfft, which uses Fourier coefficients for this transformation.
numpy irfft by amplitude and phase spectrum
1.2
0
0
411
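A small sketch of the reconstruction described in the answer above: rebuild the complex rfft coefficients from amplitude and phase, then feed them to irfft. The example signal is made up, and n is passed explicitly so the output length matches the original.

```python
import numpy as np

# Made-up real signal and its one-sided spectrum.
x = np.sin(np.linspace(0, 4 * np.pi, 32))
spectrum = np.fft.rfft(x)

amplitude = np.abs(spectrum)
phase = np.angle(spectrum)

# Rebuild complex coefficients from amplitude and phase, then invert.
rebuilt = amplitude * np.exp(1j * phase)
x_back = np.fft.irfft(rebuilt, n=len(x))

print(np.allclose(x, x_back))   # True
```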
42,043,418
2017-02-04T17:37:00.000
0
0
0
0
android,python,apache2,cgi
42,043,536
1
true
1
0
Hooray for worrying about security! Yes. There are always security holes. Use HTTPS rather than HTTP... Everywhere... (get free certificate from letsencrypt.com) Submitting data should generally use POST, not GET. (POST and HTTPS means the data is encrypted during transport. GET requests data via the URL which itself isn't encrypted. Mobile vs. Desktop isn't an issue Json vs. whatever isn't an issue Django vs. Python CGI isn't really an issue: if not properly configured either can have security issues.
1
0
0
1. Hi, I have a Python CGI script on an Apache2 server. 2. I want to send data from Apache2; the format is JSON. 3. The data is sent to a mobile application. 4. The mobile application requests the data using the HTTP request method GET. 5. The application uses HTTPURLCONNECTION. But people say this is a security hole. Is it really a security hole? Could the solution be Django on Apache2? Or could the solution be SSL?
Apache2 uses Python CGI script, it is Security hole?
1.2
0
0
102
42,044,315
2017-02-04T18:58:00.000
-1
0
0
0
python,selenium,webdriver,selenium-chromedriver,ui-automation
42,044,535
3
false
0
0
This should be as simple as not calling driver.quit() at the end of your test case. You should be left with the chrome window in an opened state.
1
3
0
I am trying to keep the chrome browser open after selenium finishes executing my test script. I want to re-use the same window for my second script to run.
How to keep chrome browser window open to be re-used after selenium script finishes on python
-0.066568
0
1
9,560
42,044,560
2017-02-04T19:19:00.000
0
1
0
1
shell,python-3.x,ftp,ibm-midrange
42,054,160
3
false
0
0
The CL command RUNRMTCMD can be used to invoke a command on a PC running a rexec() client. iSeries Access for Windows offers such a client, and there are others available. With the iSeries client, the output of the PC command is placed in a spool file on the AS/400, which should contain the results of the FTP session. You can copy the spool file to a file using the CPYSPLF command and SNDDST it to yourself, but I am not sure the contents will be converted from EBCDIC to ASCII. Check out Easy400.net for the MMAIL programs developed by Giovanni Perotti. This package includes an EMAILSPL command to email a spool file. I believe you will need to pay $50 for the download. I think you are on the right track, but there are a lot of details.
1
0
0
I am looking for an approach/design through which I can automate the process of FTP from a Windows location to an IFS location present on an AS400 environment whenever a new file is added to the Windows path. Below is the approach I thought of; please refine it if needed. We have an option, WRKJOBSCDE, through which we can run a CL program on a schedule of 1 hr. Write a CL program which invokes a script (Python/shell) to talk to the Windows location (say X: drive, having its IP as xx.xxx.xx.xx). The script has to search for the latest file in the X: drive location and FTP that jar (of size 5 MB max) to the IFS location (say /usr/dta/ydrive) on the AS400 machine. Then the CL program invoked in STEP 2 has to mail me, using SNDDST, the list of all the jars FTP'd by the scheduler job that runs every 1 hr in STEP 1. Also, I am new to CL programming/RPGLE. Please help me with some learning material and also the design of such concepts.
FTP Jar file from share path on windows to IFS location in AS400?
0
0
0
500
42,045,382
2017-02-04T20:43:00.000
1
0
1
0
python,pip,virtualenv
42,045,556
2
false
0
0
In my experience the best way to manage Python projects across multiple computers is NOT to try to distribute pip packages or virtualenv installations along with your program because that can lead to all sorts of problems. In fact, I'm not even sure that what you're trying to do is possible. Instead I would recommend the following: Exclude your virtualenv installation from your git repo by adding env to your .gitignore file. Run pip freeze > requirements.txt to write all required packages to requirements.txt. On any other computers you need to run the program on, run pip install -r requirements.txt to install the required packages. This approach, besides being quite straightforward, also gives you (and anyone else who may want to run your program) the flexibility to set up their local Python environment however they want to.
1
0
0
I'm a bit confused with what is happening, but I may be just misunderstanding how virtualenv is meant to work. First, I discovered I was getting errors because the path to my git folder had spaces in it. After removing spaces from the path, I created a fresh virtualenv, and then when activated pip list started working properly - showing what was installed into the site-packages dir. Note, I did not create the venv with --no-site-packages, and I did not create a requirements.txt with pip freeze. Here's where the confusion starts... At home, I git pull to sync up, and I see the new venv folder, but: Activating the venv and using pip list does not show the packages that were installed at work/into the repo. Example, the PyQt folder is less than half the size it was at work. Note QT itself was installed at work but not at home (standalone installation obviously, not pip). Another example is openpyxl. Folder is there, but not mentioned in pip list. Does pip freeze exist because getting things setup on a separate computer means you need to globally install what is listed in requirements.txt (if I had created one)? I thought the venv would contain everything and packages don't need to be installed since they are already in the folder. I know its mentioned in virtualenv docs to gitignore the env, but I don't see why. And I've heard its easier to have it in the repo. Unless of course this is a no-no, hence my troubles. I would appreciate some guidance understanding how pip, venv, and git are best used together for using multiple computers (and of course multiple people). You would think Googling would solve it, but so far these specifics have eluded me. Thanks
Understanding pip, virtualenv and packages
0.099668
0
0
143
42,046,184
2017-02-04T22:07:00.000
3
0
0
0
python,numpy,logarithm
42,046,234
3
false
0
0
You can use ** for exponentiation: np.log(x/y) ** 2
1
0
1
I am trying to define ln2(x/y) in Python, within NumPy. I can define ln(x) as np.log(x), but how can I define ln2(x/y)? ln2(x/y); the natural logarithm to the power of 2
Define logarithmic power for NumPy
0.197375
0
0
235
42,046,527
2017-02-04T22:48:00.000
1
0
1
0
python,nested,nested-lists
42,046,548
2
false
0
0
If you want to insert all values in b at a specific index in a, just do: a[1] = b
1
0
0
Python: Learning the basics here, but I have two lists and am trying to REPLACE the values of b at a specific index of a. I've tried doing a.insert(1, b), but that shifts the values to the side to insert the list.
Trying to replace a value within a list (via index) with another list in one line of code?
0.099668
0
0
81
42,048,725
2017-02-05T05:01:00.000
1
0
0
0
python,r,nlp,preprocessor,text-classification
42,048,793
2
false
0
0
Manual annotation is a good option since you have a very good idea of an ideal document corresponding to your label. However, with the large dataset size, I would recommend that you fit an LDA to the documents and look at the topics generated; this will give you a good idea of labels that you can use for text classification. You can also use LDA for text classification eventually, by finding representative documents for your labels and then finding the closest documents to that document by a similarity metric (say cosine). Alternatively, once you have an idea of labels, you can also assign them without any manual intervention using LDA, but then you will be restricted to unsupervised learning. Hope this helps! P.S. - Be sure to remove all the stopwords and use a stemmer to club together words of a similar kind, for example (managing, manage, management), at the pre-processing stage.
2
0
1
I have a data set of 1M+ observations of customer interactions with a call center. The text is free text written by the representative taking the call. The text is not well formatted nor is it close to being grammatically correct (a lot of short hand). None of the free text has a label on the data as I do not know what labels to provide. Given the size of the data, would a random sample of the data (to give a high level of confidence) be reasonable first step in determining what labels to create? Is it possible not to have to manually label 400+ random observations from the data, or is there no other method to pre-process the data in order to determine the a good set of labels to use for classification? Appreciate any help on the issue.
Text Classification - Label Pre Process
0.099668
0
0
501
42,048,725
2017-02-05T05:01:00.000
1
0
0
0
python,r,nlp,preprocessor,text-classification
42,063,332
2
true
0
0
Text Pre-Processing: Convert all text to lower case, tokenize into unigrams, remove all stop words, use a stemmer to normalize each token to its base word. There are 2 approaches I can think of for classifying the documents, a.k.a. the free text you spoke about. Each free text is a document: 1) Supervised classification Take some time and randomly pick a few samples of documents and assign them a category. Do this until you have multiple documents per category and all categories that you want to predict are covered. Next, create a Tf-Idf matrix from this text. Select the top K features (tune the value of K to get the best results). Alternatively, you can use SVD to reduce the number of features by combining correlated features into one. Please bear in mind that you can use other features, like the department of the customer service executive and many others, also as predictors. Now train a machine learning model and test it out. 2) Unsupervised learning: If you know how many categories you have in your output variable, you can use that number as the number of clusters you want to create. Use the Tf-Idf vectors from the above technique and create k clusters. Randomly pick a few documents from each cluster and decide which category the documents belong to. Suppose you picked 5 documents and noticed that they belong to the category "Wanting Refund". Label all documents in this cluster "Wanting Refund". Do this for all the remaining clusters. The advantage of unsupervised learning is that it saves you the pain of pre-classification and data preparation, but beware of unsupervised learning: the accuracy might not be as good as supervised learning. The two methods explained are an abstract overview of what can be done. Now that you have an idea, read up more on the topics and use a tool like RapidMiner to achieve your task much faster.
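Not part of the original answer, but a minimal scikit-learn sketch of the unsupervised route described above (TF-IDF then k-means); the example documents and cluster count are made up:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["want refund for broken item", "refund not received yet",
        "change my delivery address", "update shipping address please"]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)   # documents as TF-IDF vectors
km = KMeans(n_clusters=2, random_state=0).fit(X)
print(km.labels_)   # cluster id per document; read a few documents per cluster and name the clusters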
2
0
1
I have a data set of 1M+ observations of customer interactions with a call center. The text is free text written by the representative taking the call. The text is not well formatted nor is it close to being grammatically correct (a lot of short hand). None of the free text has a label on the data as I do not know what labels to provide. Given the size of the data, would a random sample of the data (to give a high level of confidence) be reasonable first step in determining what labels to create? Is it possible not to have to manually label 400+ random observations from the data, or is there no other method to pre-process the data in order to determine the a good set of labels to use for classification? Appreciate any help on the issue.
Text Classification - Label Pre Process
1.2
0
0
501
42,050,671
2017-02-05T09:55:00.000
8
0
1
0
python,python-2.7
42,050,682
1
true
0
0
str.join() joins the elements of the iterable in the order of iteration, so it depends on the thing you are joining. A list object is an ordered sequence, so yes, the order is preserved.
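A quick illustration:
items = ['c', 'a', 'b']
print('-'.join(items))   # prints 'c-a-b': same order as the list, nothing is sorted or reordered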
1
1
0
I want to convert a list into a string using join(). I have to keep the order of the list elements in the string. Can I be sure that Python never changes the order during conversion? I use Python 2.7.x
Does Python '.join' keeps the input data order?
1.2
0
0
135
42,053,240
2017-02-05T14:31:00.000
0
0
1
0
python-3.x,crash,python-idle
42,232,624
1
true
0
0
In the end, I just retyped my code. Luckily, I'd done a backup the previous night, so didn't lose too much. I am now making sure to do daily backups.
1
1
0
I was writing my code, then I pressed Ctrl+S. It then started not responding. I closed it and came back on to find the file was now empty! Anyone know how I can retrieve it?
Python IDLE crashed when saving and all my code disappeared
1.2
0
0
431
42,053,855
2017-02-05T15:35:00.000
0
0
0
1
python,django
42,053,909
1
true
1
0
Well, you simply need to find a way for the two of them to communicate without opening huge security holes. My suggestion would be a message queue (RabbitMQ, Amazon SQS). The AWS application writes jobs to the message queue, and a local script runs the worker, which waits for messages to be written to the queue for it to pick up.
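A rough boto3 sketch of that idea (the region, queue URL and message body below are placeholders, not taken from the question):
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"   # placeholder queue

# AWS/Django side: enqueue a job describing what the local script should do
sqs.send_message(QueueUrl=queue_url, MessageBody='{"script": "report.py", "args": ["--daily"]}')

# local Windows worker: poll the queue, run the local script, then delete the message
resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    print("would run local script for:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])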
1
0
0
I have a requirement to run a local Python script, which takes arguments and runs on a local Windows computer, from Python code hosted on AWS.
How to run a local python script from django project hosted on aws?
1.2
0
0
59
42,057,667
2017-02-05T21:42:00.000
1
0
0
0
python,tensorflow
42,057,766
1
false
0
0
I'm not sure exactly what you are asking, so I will answer what I understood. Suppose you want to predict only one class, for example digit 5 versus the rest of the digits. Then first you need to relabel your vectors: label all those vectors whose ground truth is 5 as 'one' and those whose ground truth is not 5 as 'zero'. Then design your network with only two output nodes, where the first node shows the probability that the input vector belongs to class 'one' (i.e. digit 5) and the second node shows the probability of belonging to class 'zero'. Then just train your network. To find accuracy, you can use simple techniques like just counting how many times you predict right, i.e. if the probability is higher than 0.5 for the right class, classify it as that class. I hope that helps; if not, it would be better if you could explain your question more precisely.
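A small NumPy sketch of the relabelling step (digit 5 versus everything else); y here is just an example array of 0-9 labels:
import numpy as np
y = np.array([3, 5, 1, 5, 7])           # example MNIST-style labels
y_binary = (y == 5).astype(np.int64)    # 1 for digit 5, 0 for everything else
y_onehot = np.eye(2)[y_binary]          # two-node targets: column 1 corresponds to "is digit 5"
print(y_binary, y_onehot)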
1
1
1
I want to find the accuracy of one class in the MNIST dataset. So how can I split it on the basis of classes?
How can we use MNIST dataset one class as an input using tensorflow?
0.197375
0
0
567
42,058,677
2017-02-05T23:51:00.000
0
0
1
0
python,pip,virtualenv
42,059,013
2
false
0
0
You can use the history command to view the history of all your commands and then grep for pip, with the output redirected to a file. Similar to the comment above.
1
1
0
I'd like to keep a record of all pip commands that were executed in a given virtual environment and of the package versions that got installed/updated/removed. Is there an easy way to do that? Alternatively, how do I get requirements.txt (including --install-option, etc.) out of my virtual environment state, if that's possible? Presumably, only the immediate dependencies need to be there.
log all pip commands in a given virtual environment?
0
0
0
337
42,059,103
2017-02-06T01:03:00.000
0
0
0
0
python,tensorflow,one-hot-encoding
59,105,698
1
false
0
0
While preparing the data you can use NumPy to set all the data points in class 5 to 1 and all the others to 0, e.g. labels = np.where(labels == 5, 1, 0), and then you can create a binary classifier using TensorFlow to classify them, using a binary_crossentropy loss to optimize the classifier.
1
2
1
Suppose you want to predict only one class. Then first you need to label your vectors so that all those vectors whose ground truth is 5 are labelled 'one' and those whose ground truth is not 5 are labelled 'zero'. How can I implement this in TensorFlow using Python?
how to predict only one class in tensorflow
0
0
0
164
42,059,381
2017-02-06T01:53:00.000
5
0
0
0
python,django
42,257,514
1
true
1
0
I've worked on this since I posted it, and the real answer is what I've synthesized from multiple sources (including other stack exchange posts). So... Everything changed in Django before I started using it. After 1.7, the 'migrations' bit was internalized and posts including the word "South" are about how the world was before 1.7. Further, the complication in my case dealt with the issue of migrations in that the project was already active and had real data in production. There were some posts including a GitHub chunk of code that talked about migrating tables from one App to another App. This is inherently part of the process, but several posts noted that to do this as a "migration" you needed the Migration.py to be in another App. Maybe even an App created for the purpose. In the end, I decided to approach the problem by changing the label in the Application class of apps.py in the application in question. In my case, I am changing "pages" to "phpages" but the directory name of my app is still pages. This works for me because the mezzanine app's "pages" sub-App is back in the python library and not a conflict in the filesystem. If this is not your situation, you can solve it with another use of label. So... Step-by-step, my procedure to rename pages to phpages. Create apps.py in the pages sub-directory. In it put: class PagesConfig(AppConfig): name = "pages" label = "phpages" verbose_name = "Purple Hat Pages" Key among these is label which is going to change things. In __init__.py in the pages sub-directory, put default_app_config = "pages.apps.PagesConfig" In your settings.py change the INSTALLED_APPS entry for your app to 'pages.apps.PagesConfig', ... All of your migrations need to be edited in this step. In the dependencies list, you'll need to change 'pages' to 'phpages'. In the ForeignKeys you'll need to also change 'pages.Something' to 'phpages.Something' for every something in every migration file. Find these under pages/migrations/nnnn_*.py If you refer to foreign keys in other modules by from pages.models import Something and then use ForeignKey(Something), you're good for this step. If you use ForeignKey('pages.Something') then you need to change those references to ForeignKey('phpages.Something'). I would assume other like-references are the same. For the next 4 steps (7, 8, 9 and 10), I built pagestophpages.sql and added it to the pages sub-directory. It's not a standard django thing, but each test copy and each production copy of the database was going to need the same set of steps. UPDATE django_content_type SET app_label='phpages' WHERE app_label='pages'; UPDATE django_migrations SET app='phpages' WHERE app='pages'; Now... in your database (mine is PostgreSQL) there will be a bunch of tables that start with "pages". You need to list all of these. In PostgreSQL, in addition to tables, there will be sequences for each AutoField. For each table construct ALTER TABLE pages_something RENAME TO phpages_something; For each sequence ALTER SEQUENCE pages_something_id_seq RENAME TO phpages_something_id_seq; You should probably back up the database. You may need to try this a few times. Run your SQL script through your database shell. Note that all other changes can be propagated by source code control (git, svn, etc). This last step must be run on each and every database. Obviously, you need to change pages and phpages to your stuff. You may have more than one table with one auto field and it may not be named something.
Another thing of note, in terms of process, is that this is probably a hard point in your development where everything needs to be in sync. Given that we're playing with editing migrations and changing names, you need a hard stop in development so that everything that's going to be changed (dev box, test box, staging box, production box ... and all of their databases) is at the same revision and schema. YMMV. This is also solving the problem by using the label field of the Application class. I chose this method in deference to changing the directory name because it involved fewer changes. I chose not to change the name field because that did not work for me. YMMV. I must say that I'm a little disappointed that myapp/pages conflicts with mezzanine.pages. It looks like some of the reasons are due to the pages slug being used in the database table name (and off the top of my head, I don't see a good solution there). What I don't see that would make sense is the equivalent to "from mezzanine import pages as mpages" or somesuch. The ability to alias imported apps (not talking about apps in my own file tree). I think this might be possible if I sucked the app into my own file tree --- but this doesn't seem to be a sanctioned act, either.
1
1
0
So... I've done a lot of research on this... there are answers, but not complete or appropriate answers. I have an in-use and in-production django "project" in which the "main" application is called "pages" ... for reasonably dumb reasons. My problem is now to add mezzanine ... which has a sub-module mezzanine.pages (it seems to be required ... but I'm pretty sure I need it). mezzanine.pages apparently conflicts with "pages" ... Now ... my pages contains a slew of non-trivial models, including one that extends user (one-to-one ref), and many references to other apps' tables (fortunately only outbound, ForeignKey). It also has management/commands and about 20 migrations of its own history. I gather I either have to change pages to mypages, or is there another route (changing mezzanine.pages seems wrong-headed)? For reference, the project is on Django 1.8 right now, so the preferred answer includes migrations.
Django Migrate Change of App Name (active project)
1.2
0
0
1,272
42,060,186
2017-02-06T04:08:00.000
2
0
1
0
python,gspread
42,066,652
1
true
0
0
You can access a spreadsheet key with mySpreadSheet.id after you have opened it by title.
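A short sketch of that (the credentials object and the workbook title are placeholders; authorize however you normally do):
import gspread
gc = gspread.authorize(credentials)     # 'credentials' is whatever auth object you already use
sheet = gc.open("My Workbook Title")    # open by title
print(sheet.id)                         # the spreadsheet key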
1
2
0
How do I get the key of the workbook if I know only the name of the workbook? I can use open by title, but once I'm in, I couldn't find a get-key type of method in the docs. Is there a way to get the key by only knowing the title?
gspread get key once opened by title
1.2
0
0
140
42,061,730
2017-02-06T06:39:00.000
1
0
0
0
python,statistics
42,061,940
1
true
0
0
In any given data set, labeling variables as dependent or independent is arbitrary -- there is no fundamental reason that one column should be independent and another should be dependent. That said, typically it's conventional to say that "causes" are independent variables and "effects" are dependent variables. But this business about causes and effects is arbitrary too -- often enough there are several interacting variables, with each of the them "causing" the others, and each of them "affected" by the others. The bottom line is that you should assign dependent and independent according to what you're trying to achieve. What is the most interesting or most useful variable in your data? Typically if that one is missing or has an unknown value, you'll have to estimate it from the other variables. In that case the interesting variable is the dependent variable, and all others are independent. You'll probably get more interest in this question on stats.stackexchange.com.
1
0
1
I am a little bit confused in the classification of dependent and independent variables in my dataset, on which I need to make a model for prediction. Any insights or how-to's would be very helpful here. Suppose my dataset have 40 variables. In this case, it would be very difficult to classify the variables as independent or dependent. Are there any tests in python which can help us in identifying these?
Statistics: How to identify dependent and independent variables in my dataset?
1.2
0
0
2,782
42,070,138
2017-02-06T14:28:00.000
0
0
0
0
python,regex,database,chess
42,104,038
1
false
0
0
@Ev.Kounis' solution is simple and effective; I've used it myself successfully. Most of the time, we only care about the top chess players. Here's what I did: Created a simple function like @Ev.Kounis suggests. I also scanned the player rating. For example, there were several "Carlsen" players in my database, but they wouldn't have a FIDE rating over 2700. I also search for the other player in the game. If I'm interested in Garry Kasparov, he wouldn't be playing a club game with a 1600-rated opponent. Get a better database. Chessgames and TWIC have better quality than Chessbase. You could try regular expressions, but it's unnecessary. There's a simple pattern in how a player name can differ: "Carlsen, M" == "Magnus Carlsen" This applies to other players in the database. Save regular expressions until you really have to use them.
1
1
1
I am studying a chess database with more than one million games. I am interested in identifying some characteristics of different players. The problem I have is that each single player appears with several identifications. For example, "Carlsen, M.", "Carlsen, Ma", "Carlsen, Magnus" and "Magnus Carlsen" all correspond to player "Magnus Carlsen". Furthermore, there are other players which share Carlsen's last name, but have different names, such as "Carlsen, Ingrid Oen" and "Carlsen, Jesper". I need to identify all the different names in the database which correspond to each specific player and combine them. Is there any way to do that with Python?
Combining different names in a database
0
0
0
78
42,072,399
2017-02-06T16:16:00.000
0
0
1
0
python,python-docx
42,074,715
1
true
0
0
There is no "centralized authority" in a Word document of what Fonts have been used. You'll need to parse through the full document and detect them yourself. Runs are the right place to look, but you'll also need to check styles, both paragraph and character styles. Also, to be thorough, you'll need to check the document default font.
1
0
0
I am using python-docx 0.8.5. I can't seem to figure out how to get a list of the typefaces and sizes used in a document. There is a Font object, accessible as Run.font, but I can't get further with this problem. Can somebody please point me to an example? Thanks
get a list of typeface and sizes used in a docx
1.2
0
0
51
42,074,639
2017-02-06T18:22:00.000
1
0
1
0
python,indexing
42,074,741
3
false
0
0
The best way to solve this problem is to use a dictionary instead of a list. A dictionary is a set of key:value pairs, and you can run a simple line of code to return the key corresponding to a value in the dictionary.
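For the data in the question, one way this could look:
pairs = [['A', 35], ['B', 74], ['C', 21], ['D', 2]]
lookup = {value: name for name, value in pairs}   # invert into value -> name
print(lookup[21])                                 # 'C'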
1
2
0
I have a list of lists that looks like this: [['A', 35], ['B', 74], ['C', 21], ['D', 2]] I want to find the first part of the list based on the second part. For example, I know I want to get 'C' just by using 21. I know the second part (21) and want to use it to get the first part ('C'). What's the best way to do this?
finding the associated value in a list of lists
0.066568
0
0
45
42,078,221
2017-02-06T22:13:00.000
2
0
1
0
python,numpy,scipy
42,078,393
2
true
0
0
As the documentation says, when you use random.seed you have two options: random.seed() - seeds from the current time, or from an operating-system-specific randomness source if available random.seed(a) - hash(a) is used as the seed instead Using time as the seed is better practice if you want to have different numbers between two instances of your program, but it is much harder to debug. Using a hardcoded number as the seed makes your random numbers much more predictable.
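For instance:
import random
random.seed(1234)
print([random.randrange(0, 10) for _ in range(5)])   # some fixed sequence
random.seed(1234)
print([random.randrange(0, 10) for _ in range(5)])   # the exact same sequence again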
1
0
0
I am using irand=randrange(0,10) to generate random numbers in a program. This random number generator is used multiple times in the code. At the beginning of the code, I initiate the seed with random.seed(1234). Is this the right practice?
Seeding Python's random number generator
1.2
0
0
1,312
42,079,675
2017-02-07T00:23:00.000
0
0
1
0
python-2.7
42,079,696
1
false
0
0
The .pyc files are not readable by humans - the python interpreter compiles the source code to these files, and they are used by the python virtual machine. You can delete these files, and when you run the .py file again, you will see a new .pyc file created.
1
0
0
I was recently trying to make my own module when I realised a copy of my module had been made, but instead of ending in .py like the original, it ended in .pyc. When I opened it, I could not understand a thing. I was using the import to make a game with pygame, and the fact that the .pyc file had a bunch of question marks and weird symbols seemed like it would be helpful against hackers if I ever make a game good enough for release, which probably won't happen. I just want to know a few things about these files: Can other computers that download the game still read the module if I delete the original and only leave the weirder .pyc file? Are they readable by humans, and can they actually prevent hacks on a downloaded game? (It's not online; I just don't want an easy game for people who know Python.) Should I get rid of them for what I am doing? (I saw other questions asking how to do that, but the answers said it was helpful.) Last but not least, will it work for .txt files (will they not just be read as a bunch of symbols)? Thanks!
are there limitations on .pyc files?
0
0
0
66
42,079,848
2017-02-07T00:44:00.000
0
0
0
0
python,web-scraping,session-cookies,vpn
42,079,994
1
false
1
0
HTTP 400 is returned, if the request is malformed. You should inspect the request being made, when you get the error. Perhaps, it is not properly encoded. VPN should not cause an HTTP 400.
1
1
0
I am scraping data from peoplefinders.com, a website which is not accessible from my home country, so I am basically using a VPN client. I log in to this website with a session post, and through the same session I get items from different pages of the same website. The problem is that I do the scraping in a for loop with get requests, but for some reason I receive a response 400 error after several iterations. The error occurs after scraping 4-5 pages on average. Is it due to the fact that I am using a VPN connection? Don't all requests from the same session contain the same cookies and hence allow me to stay logged in while scraping different pages of the same website? Thank you
Does using a vpn interrupts python sessions requests which are using the same cookies over and over?
0
0
1
397
42,080,598
2017-02-07T02:19:00.000
0
0
0
0
java,python,rest,machine-learning,scikit-learn
69,476,803
6
false
1
0
I have been experimenting with this same task and would like to add another option, not using a REST API: The format of the Apache Spark models is compatible in both the Python and Java implementations of the framework. So, you could train and build your model in Python (using PySpark), export it, and import it on the Java side for serving/predictions. This works well. There are, however, some downsides to this approach: Spark has two separate ML packages (ML and MLLib) for different data formats (RDD and dataframes) The algorithms for training models in each of these packages are not the same (no model parity) The models and training classes don't have uniform interfaces. So, you have to be aware of what the expected format is and might have to transform your data accordingly for both training and inference. Pre-processing for both training and inference has to be the same, so you either need to do this on the Python side for both stages or somehow replicate the pre-processing on the Java side. So, if you don't mind the downsides of a REST API solution (availability, network latency), then this might be the preferable solution.
3
5
1
I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python. Now I have a use case where in I want to expose a REST api which builds Machine Learning Model, and another REST api which makes the prediction. What architecture should help me to achieve the same. (An example of the same maybe a Amazon Machine Learning. They have exposed REST api for generating model and making prediction) I searched round the internet and found following ways: Write the whole thing in Java - ML model + REST api Write the whole thing in Python - ML model + REST api But playing around with Machine Learning, its models and predictions is really easier and more supported in python with libraries like sklearn, rather than Java. I would really like to use python for Machine Learning part. I was thinking about and approach wherein I write REST api using JAVA but use sub-process to make python ML calls. Will that work? Can someone help me regarding the probable architectural approaches that I can take. Also please suggest the most feasible solution. Thanks in advance.
What is the best way to build and expose a Machine Learning model REST api?
0
0
0
5,677
42,080,598
2017-02-07T02:19:00.000
0
0
0
0
java,python,rest,machine-learning,scikit-learn
46,918,647
6
false
1
0
I'm using Node.js as my rest service and I just call out to the system to interact with my python that holds the stored model. You could always do that if you are more comfortable writing your services in JAVA, just make a call to Runtime exec or use ProcessBuilder to call the python script and get the reply back.
3
5
1
I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python. Now I have a use case where in I want to expose a REST api which builds Machine Learning Model, and another REST api which makes the prediction. What architecture should help me to achieve the same. (An example of the same maybe a Amazon Machine Learning. They have exposed REST api for generating model and making prediction) I searched round the internet and found following ways: Write the whole thing in Java - ML model + REST api Write the whole thing in Python - ML model + REST api But playing around with Machine Learning, its models and predictions is really easier and more supported in python with libraries like sklearn, rather than Java. I would really like to use python for Machine Learning part. I was thinking about and approach wherein I write REST api using JAVA but use sub-process to make python ML calls. Will that work? Can someone help me regarding the probable architectural approaches that I can take. Also please suggest the most feasible solution. Thanks in advance.
What is the best way to build and expose a Machine Learning model REST api?
0
0
0
5,677
42,080,598
2017-02-07T02:19:00.000
0
0
0
0
java,python,rest,machine-learning,scikit-learn
42,127,532
6
false
1
0
Well, it depends on the situation in which you use Python for ML. For classification models like random forest, use your training dataset to build the tree structures and export them as a nested dict. Whatever the language you used, transform the model object into that kind of data structure and then you can use it anywhere. But if your situation involves large-scale, real-time, distributed datasets, then as far as I know the best way is probably to deploy the whole ML process on servers.
3
5
1
I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python. Now I have a use case where in I want to expose a REST api which builds Machine Learning Model, and another REST api which makes the prediction. What architecture should help me to achieve the same. (An example of the same maybe a Amazon Machine Learning. They have exposed REST api for generating model and making prediction) I searched round the internet and found following ways: Write the whole thing in Java - ML model + REST api Write the whole thing in Python - ML model + REST api But playing around with Machine Learning, its models and predictions is really easier and more supported in python with libraries like sklearn, rather than Java. I would really like to use python for Machine Learning part. I was thinking about and approach wherein I write REST api using JAVA but use sub-process to make python ML calls. Will that work? Can someone help me regarding the probable architectural approaches that I can take. Also please suggest the most feasible solution. Thanks in advance.
What is the best way to build and expose a Machine Learning model REST api?
0
0
0
5,677
42,080,763
2017-02-07T02:39:00.000
1
0
0
0
python,tkinter
42,080,815
1
false
0
1
No, you cannot rotate text on a canvas. From the canonical documentation: Individual items may be moved or scaled using widget commands described below, but they may not be rotated.
1
0
0
In Tkinter, I need to be able to rotate text by a certain number of degrees, not just move it to a new x and y coordinate. Is there any way I could do this? Any help is greatly appreciated!
Tkinter c.move text with degrees?
0.197375
0
0
116
42,081,202
2017-02-07T03:27:00.000
0
0
1
0
python,azure,azure-blob-storage,azure-machine-learning-studio
42,081,531
2
false
0
0
yes, you should be able to do that using Python. At the very least, straight REST calls should work.
1
0
1
Is it possible to import images from your Azure storage account from within a Python script module as opposed to using the Import Images module that Azure ML Studio provides. Ideally I would like to use cv2.imread(). I only want to read in grayscale data but the Import Images module reads in RGB. Can I use the BlockBlobService library as if I were calling it from an external Python script?
Importing images Azure Machine Learning Studio
0
0
0
775
42,081,790
2017-02-07T04:33:00.000
2
0
0
0
python,pandas,dataframe
42,081,957
1
true
0
0
Altering @VaishaliGarg's answer a little, you can use df.groupby(['Qgender','Qmajor']).count() Also, if you need a plain DataFrame out of it (rather than one indexed by the group keys), add .reset_index(): df.groupby(['Qgender','Qmajor']).count().reset_index()
1
1
1
Sorry about the vague title, but I didn't know how to word it. So I have a pandas dataframe with 3 columns and any amount of rows. The first column is a person's name, the second column is their major (six possible majors, always written the same), and the third column is their gender (always 'Male' or 'Female'). I was told to print out the number of people in each major, which I was able to accomplish by saying table.Qmajor.value_counts() (table being my dataframe variable name). Now I am being asked to print the amount of males and females in each major, and I have no idea where to start. Any help is appreciated. The column names are Qnames, Qmajor, and Qgender.
Pandas dataframe: Listing amount of people per gender in each major
1.2
0
0
5,858
42,086,214
2017-02-07T09:32:00.000
1
0
0
0
python,tensorflow,pycharm
44,600,199
1
false
0
0
I have run into a similar error running caffe on pycharm. I think it's because of the version of Python. When I installed Python 2.7.13, it worked!
1
1
1
When I ran a script in PyCharm, it exited with: I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally DEBUG:tm._add: /joy, sensor_msgs/Joy, sub Fatal Python error: (pygame parachute) Segmentation Fault Process finished with exit code 134 (interrupted by signal 6: SIGABRT) But when I debug it in PyCharm, the program ran without any problem. Also, if I ran the script in ubuntu terminal, no problem occurs, too. Why does this happen, or how can I debug this problem?
pycharm cannot run script but can debug it
0.197375
0
0
1,056
42,087,789
2017-02-07T10:46:00.000
1
0
1
0
python,spyder
42,121,732
2
false
0
0
(Spyder developer here) We're aware of these problems in the Python console, but unfortunately we don't know how to fix them. Please use the IPython console instead because the Python console is going to be removed in Spyder 3.2.
1
0
0
I am currently using Spyder and have been for a long time; however, I downloaded Anaconda recently and started using Spyder for Python 3.5, which gives me several problems. Whenever I run a script in the Python Console, I have to run it twice, and when I am finished running it and want to run a new one, I have to kill the current process and reload it. I am currently using some scripts with threading, but that never used to be a problem before I upgraded. Does anyone have similar experiences and know how to fix it?
Spyder IDE environment in Python
0.099668
0
0
502
42,089,045
2017-02-07T11:44:00.000
-1
0
1
0
python,pymongo
63,628,554
2
false
0
0
Try removing all whitespace in the files (\n, spaces outside string quotes). It may work like a miracle.
1
5
0
I want to insert a document into the collection from a JSON file, but it says bson.errors.InvalidDocument: key '$oid' must not start with '$'. How can I solve it? Example document: [{"name": "Company", "_id": {"$oid": "1234as123541gsdg"}, "info": {"email": "[email protected]"}}]
bson.errors.InvalidDocument: key '$oid' must not start with '$' trying to insert document with pymongo
-0.099668
1
0
9,580
42,089,856
2017-02-07T12:23:00.000
2
0
1
0
python,caching
42,091,305
3
true
0
0
There is no magic possible there - you want to store a value, so you need a place to store your value. You can't just decide "I won't have an extra entry in my __slots__ because it is not elegant" - you don't need to call it _cached: give it whatever name you want, but these cached values are something you want to exist in each of the object's instances, and therefore you need an attribute. You can cache in a global (module level) dictionary in which the keys are id(self) - but that would be a major headache to keep synchronized when instances are deleted. (The same thing is true for a class-level dictionary, with the further downside of it still being visible on the instance.) TL;DR: the "one and obvious way to do it" is to have a shadow attribute, starting with "_", to keep the values you want cached, and to declare these in __slots__. (If you use a _cached dictionary per instance, you lose the main advantage of __slots__, which is exactly not needing one dictionary per instance.)
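A minimal sketch of the shadow-attribute approach described above (the class and names are just illustrative):
class Circle:
    __slots__ = ('radius', '_area')           # '_area' is the cache slot

    def __init__(self, radius):
        self.radius = radius
        self._area = None                     # not computed yet

    @property
    def area(self):
        if self._area is None:                # compute once, reuse afterwards
            self._area = 3.141592653589793 * self.radius ** 2
        return self._area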
2
2
0
I am trying to cache a computationally expensive property in a class defined with the __slots__ attribute. Any idea how to store the cache for later use? Of course the usual way of storing a dictionary in instance._cache would not work without __dict__ being defined. For several reasons I do not want to add a '_cache' string to __slots__. I was thinking whether this is one of the rare use cases for global. Any thoughts or examples on this matter?
Python caching attributes in object with __slots__
1.2
0
0
992
42,089,856
2017-02-07T12:23:00.000
-2
0
1
0
python,caching
42,089,927
3
false
0
0
Something like the Borg pattern can help. You can alter the state of your instance in the __init__ or __new__ methods.
2
2
0
I am trying to cache a computationally expensive property in a class defined with the __slots__ attribute. Any idea how to store the cache for later use? Of course the usual way of storing a dictionary in instance._cache would not work without __dict__ being defined. For several reasons I do not want to add a '_cache' string to __slots__. I was thinking whether this is one of the rare use cases for global. Any thoughts or examples on this matter?
Python caching attributes in object with __slots__
-0.132549
0
0
992
42,089,967
2017-02-07T12:30:00.000
1
0
0
0
python,django
42,090,129
2
true
1
0
If you are using Django REST Framework, then you can simply use serializers. But I don't think that is the case. What you want to accomplish seems very similar to the role of Django forms, but by convention they are only used for saving/updating models, i.e. POST requests. So you can either define a new class for filtering/rendering and use that in your view, or just go ahead and use Django forms, which would automatically provide basic cleaning for the different fields.
1
2
0
My Django app displays objects from the database in a table view. The problem is that these objects (models) are pretty complex: they have 50+ fields. For nearly each field I have to do some formatting: convert phone numbers from the int 71234567689 to "+7 (123) 456789" display long prices with spaces: "7 000 000" instead of "7000000" construct the full address from several fields like "street", "house" and so on (the logic is pretty complex, with several if-else-s) and so on The Django templating language has several useful tags for simple cases, but I guess it is not suitable in the general case (like mine) for serious formatting. Creating @property-s in the model class is also not an option, because the question is about rendering and is not related to the model. So I guess I should do my conversions in the view: create a dict for each obj, fill it with converted data and pass it to the template. But! The model has a lot of fields and I don't want to copy them all :) Moreover, it would be great to preserve the model structure to use it in the Django template (say, regroup) and keep query set laziness. So the greatest way would be to instruct Django "how to render". Is it possible?
Django: best way to convert data from model to view
1.2
0
0
392
42,092,448
2017-02-07T14:32:00.000
6
0
0
0
python,machine-learning,scikit-learn,knn
42,093,881
3
false
0
0
That's a pretty good question, and is unexpected at first glance because usually a normalization will help a KNN classifier do better. Generally, good KNN performance usually requires preprocessing of data to make all variables similarly scaled and centered. Otherwise KNN will be often be inappropriately dominated by scaling factors. In this case the opposite effect is seen: KNN gets WORSE with scaling, seemingly. However, what you may be witnessing could be overfitting. The KNN may be overfit, which is to say it memorized the data very well, but does not work well at all on new data. The first model might have memorized more data due to some characteristic of that data, but it's not a good thing. You would need to check your prediction accuracy on a different set of data than what was trained on, a so-called validation set or test set. Then you will know whether the KNN accuracy is OK or not. Look into learning curve analysis in the context of machine learning. Please go learn about bias and variance. It's a deeper subject than can be detailed here. The best, cheapest, and fastest sources of instruction on this topic are videos on the web, by the following instructors: Andrew Ng, in the online coursera course Machine Learning Tibshirani and Hastie, in the online stanford course Statistical Learning.
2
7
1
I had trained my model with the KNN classification algorithm, and I was getting around 97% accuracy. However, I later noticed that I had forgotten to normalise my data, so I normalised it and retrained my model; now I am getting an accuracy of only 87%. What could be the reason? And should I stick to using data that is not normalised, or should I switch to the normalized version?
Accuracy difference on normalization in KNN
1
0
0
8,817
42,092,448
2017-02-07T14:32:00.000
2
0
0
0
python,machine-learning,scikit-learn,knn
42,093,691
3
false
0
0
If you use normalized feature vectors, the distances between your data points are likely to be different than when you used unnormalized features, particularly when the ranges of the features are different. Since kNN typically uses Euclidean distance to find the k nearest points to any given point, using normalized features may select a different set of k neighbors than the ones chosen when unnormalized features were used, hence the difference in accuracy.
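A hedged scikit-learn sketch for checking both variants on a held-out set (the built-in wine dataset is only a stand-in for your own data):
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("raw features:", knn.score(X_te, y_te))

scaler = StandardScaler().fit(X_tr)                       # fit the scaler on training data only
knn = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_tr), y_tr)
print("scaled features:", knn.score(scaler.transform(X_te), y_te))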
2
7
1
I had trained my model with the KNN classification algorithm, and I was getting around 97% accuracy. However, I later noticed that I had forgotten to normalise my data, so I normalised it and retrained my model; now I am getting an accuracy of only 87%. What could be the reason? And should I stick to using data that is not normalised, or should I switch to the normalized version?
Accuracy difference on normalization in KNN
0.132549
0
0
8,817
42,096,280
2017-02-07T17:31:00.000
11
0
1
0
python,python-3.x,anaconda
53,784,519
3
false
0
0
Anaconda is a Python-based data processing and scientific computing platform. It has many very useful third-party libraries built in. Installing Anaconda is equivalent to automatically installing Python and some commonly used libraries such as NumPy, Pandas, SciPy, and Matplotlib, so it makes the installation much easier than a regular Python installation. If you don't install Anaconda, but instead only install Python from python.org, you also need to use pip to install the various libraries one by one. That is painful and you need to consider compatibility, so it is highly recommended to install Anaconda directly.
1
129
0
I am a beginner and I want to learn computer programming. So, for now, I have started learning Python by myself with some knowledge about programming in C and Fortran. Now, I have installed Python version 3.6.0 and I have struggled finding a suitable text for learning Python in this version. Even the online lecture series ask for versions 2.7 and 2.5 . Now that I have got a book which, however, makes codes in version 2 and tries to make it as close as possible in version 3 (according to the author); the author recommends "downloading Anaconda for Windows" for installing Python. So, my question is: What is this 'Anaconda'? I saw that it was some open data science platform. What does it mean? Is it some editor or something like Pycharm, IDLE or something? Also, I downloaded my Python (the one that I am using right now) for Windows from Python.org and I didn't need to install any "open data science platform". So what is this happening? Please explain in easy language. I don't have too much knowledge about these.
How is Anaconda related to Python?
1
0
0
112,856
42,096,970
2017-02-07T18:11:00.000
0
0
0
0
python,postgresql,uuid
51,623,094
2
false
0
0
The database is throwing an error because you're trying to match in a UUID-type column with a query that doesn't contain a valid UUID. This doesn't happen with integer or string queries because leaving off the last character of those does result in a valid integer or string, just not the one you probably intended. You can either prevent passing invalid UUIDs to the database by validating your input (which you should be doing anyway for other reasons) or somehow trap on this error. Either way, you'll need to present a human-readable error message back to the user. Also consider whether users should be typing in URLs with UUIDs in the first place, which isn't very user-friendly; if they're just clicking links rather than typing them, as usually happens, then how did that error even happen? There's a good chance that it's an attack of some sort, and you should respond accordingly.
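For the validation step, something along these lines in the view code (a sketch only; the helper name is made up):
from uuid import UUID

def parse_uuid(value):
    try:
        return UUID(value)     # raises ValueError for malformed input such as a truncated UUID
    except ValueError:
        return None            # let the caller answer with a 400/404 instead of hitting the database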
1
0
0
I'm building a platform with a PostgreSQL database (for the first time), but I have had experience with Oracle and MySQL databases for a few years now. My question is about the UUID data type in Postgres. I am using a UUIDv4 uuid to identify a record in multiple tables, so the request to /users/2df2ab0c-bf4c-4eb5-9119-c37aa6c6b172 will respond with the user that has that UUID. I also have an auto-increment ID field for indexing. My query is just a select with a where clause on the UUID. But when the user enters an invalid UUID like 2df2ab0c-bf4c-4eb5-9119-c37aa6c6b17 (without the last 2), the database responds with this error: Invalid input syntax for UUID. I was wondering why it returns this, because when you select on an integer-type column with a string-type value it does work. Now I need to set a middleware/check on each route that has a UUID-type parameter in it, because otherwise the server would crash. Btw I'm using Flask 0.12 (Python) and PostgreSQL 9.6
PostgreSQL UUID data type
0
1
0
1,782
42,097,052
2017-02-07T18:16:00.000
18
0
1
0
python,string,f-string
42,097,136
8
true
0
0
Unfortunately, if you want to use it you must require Python 3.6+, the same as with the matrix multiplication operator @ and Python 3.5+, or yield from (Python 3.4+ I think). These made changes to how the code is interpreted and thus throw SyntaxErrors when imported in older versions. That means you need to put them somewhere where they aren't imported in older Pythons, or guard them with an eval or exec (I wouldn't recommend the latter two!). So yes, you are right: if you want to support multiple Python versions you can't use them easily.
2
57
0
The new Python 3.6 f-strings seem like a huge jump in string usability to me, and I would love to jump in and adopt them whole heartedly on new projects which might be running on older interpreters. 2.7, 3.3-3.5 support would be great but at the very least I would like to use these in Python 3.5 code bases. How can I import 3.6's formatted string literals for use by older interpreters? I understand that formatted string literals like f"Foo is {age} {units} old" are not breaking changes, so would not be included in a from __future__ import ... call. But the change is not back-ported (AFAIK) I would need to be sure that whatever new code I write with f-strings is only ran on Python 3.6+ which is a deal breaker for a lot of projects.
Can I import Python's 3.6's formatted string literals (f-strings) into older 3.x, 2.x Python?
1.2
0
0
30,101
42,097,052
2017-02-07T18:16:00.000
0
0
1
0
python,string,f-string
59,328,337
8
false
0
0
Using dict() to hold name-value pairs In addition to the approaches mentioned elsewhere in this thread (such as format(**locals()) ) the developer can create one or more python dictionaries to hold name-value pairs. This is an obvious approach to any experienced python developer, but few discussions enumerate this option expressly, perhaps because it is such an obvious approach. This approach is arguably advantageous relative to indiscriminate use of locals() specifically because it is less indiscriminate. It expressly uses one or more dictionaries a namespace to use with your formatted string. Python 3 also permits unpacking multiple dictionaries (e.g., .format(**dict1,**dict2,**dict3) ... which does not work in python 2.7) ## init dict ddvars = dict() ## assign fixed values ddvars['firname'] = 'Huomer' ddvars['lasname'] = 'Huimpson' ddvars['age'] = 33 pass ## assign computed values ddvars['comname'] = '{firname} {lasname}'.format(**ddvars) ddvars['reprself'] = repr(ddvars) ddvars['nextage'] = ddvars['age'] + 1 pass ## create and show a sample message mymessage = ''' Hello {firname} {lasname}! Today you are {age} years old. On your next birthday you will be {nextage} years old! '''.format(**ddvars) print(mymessage)
2
57
0
The new Python 3.6 f-strings seem like a huge jump in string usability to me, and I would love to jump in and adopt them whole heartedly on new projects which might be running on older interpreters. 2.7, 3.3-3.5 support would be great but at the very least I would like to use these in Python 3.5 code bases. How can I import 3.6's formatted string literals for use by older interpreters? I understand that formatted string literals like f"Foo is {age} {units} old" are not breaking changes, so would not be included in a from __future__ import ... call. But the change is not back-ported (AFAIK) I would need to be sure that whatever new code I write with f-strings is only ran on Python 3.6+ which is a deal breaker for a lot of projects.
Can I import Python's 3.6's formatted string literals (f-strings) into older 3.x, 2.x Python?
0
0
0
30,101
42,097,562
2017-02-07T18:45:00.000
3
0
0
0
python,amazon-web-services,amazon-dynamodb,aws-lambda,race-condition
42,102,617
1
true
1
0
Instead of deleting the hostname from DynamoDB, why not lock the hostname in DynamoDB?. If each item in DynamoDB corresponds to a unique hostname, then you can use a conditional write like the following and only try to acquire a hostname if it is not already acquired. You condition on the instanceid attribute Unused hostname: {hostname: 'tom-sawyer'} UpdateItem to do a conditional write on {hostname: 'tom-sawyer'} where the condition is attribute_not_exists(instanceid) and the update expression is SET instanceid = :instanceid and the ExpressionAttributeValues map is {:instanceid: 'deadbeef'}. Basically, you only allow DynamoDB to assign an instance to a hostname when it does not have an instanceid set. Used hostname: {hostname: 'tom-sawyer', 'instanceid'='deadbeef'} UpdateItem to do a conditional write on {hostname: 'tom-sawyer'} where the condition is attribute_exists(instanceid) AND instanceid = :instanceid and the update expression is REMOVE instanceid. Basically, you only allow DynamoDB to un-assign a specific instance when the instance id being removed is set and matches the record for that hostname.
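A boto3 sketch of the "acquire" step described above (the table name is a placeholder; the attribute names follow the answer):
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("hostnames")

def try_acquire(hostname, instance_id):
    try:
        table.update_item(
            Key={"hostname": hostname},
            UpdateExpression="SET instanceid = :iid",
            ConditionExpression="attribute_not_exists(instanceid)",
            ExpressionAttributeValues={":iid": instance_id},
        )
        return True                      # this Lambda invocation won the hostname
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False                 # another invocation grabbed it first; try the next hostname
        raise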
1
0
0
Workflow: I have a Python AWS Lambda function that basically looks up a pool of hostnames in DynamoDB (JSON), attaches one of them to an instance (that spins up), and then deletes that hostname from DynamoDB so it is not used again for another instance. Problem: As soon as an instance spins up, it sends a notification to the SNS service, which triggers the Lambda to assign it a hostname from the available hostnames. There are times when multiple instances come up together and they both trigger the same Lambda function simultaneously (2 threads). There could be a race condition where both functions are looking at DynamoDB for available hostnames and assign the same one. How do I resolve this problem? Any ideas?
Race condition with AWS Lambda
1.2
0
0
3,045
42,097,944
2017-02-07T19:07:00.000
0
0
1
0
java,python,timestamp
42,098,334
2
false
1
0
Note the differences: time.time() returns the time in seconds since the epoch as a floating point number, while System.currentTimeMillis() returns the time in milliseconds since the epoch as a long. Both count from the same Unix epoch (defined in UTC), so no timezone adjustment is needed. To compare the two, you only need to convert the Python value from seconds to milliseconds by multiplying by 1000 and rounding (or convert the Java value the other way by dividing by 1000). After that you have both times in the same unit.
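e.g.:
import time
python_millis = int(round(time.time() * 1000))   # directly comparable to Java's System.currentTimeMillis()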
1
2
0
I am working on multiple projects in Python and Java. I have a timestamp from a Python project as time.time(). I need to compare it with the current timestamp in my Java project as System.currentTimeMillis(). How do I compare a time from time.time() with one from System.currentTimeMillis()?
Compare time.time() & System.currentTimeMillis()
0
0
0
1,633
42,101,552
2017-02-07T22:54:00.000
0
0
0
1
python-3.x,networking,nginx,docker
42,114,253
2
false
1
0
It's not a good thing to put a lot of applications into one container; normally you should split that up with one container per app, which is the way Docker is meant to be used. But if you absolutely want to run many apps in one container, you can use a proxy, or write a Dockerfile that exposes the ports itself.
1
0
0
I am trying to put an application that listens on several ports inside a Docker image. At the moment, I have one Docker image with an Nginx server for the front-end and a Python app: Nginx runs on port 27019 and the app runs on 5984. The index.html file talks to localhost:5984, but it seems like it only reaches it outside the container (on the localhost of my computer). The only way I can make it work at the moment is by using the -p option twice in the docker run: docker run -p 27019:27019 -p 5984:5984 app-test. Doing so, I expose two localhost ports on my computer. If I don't put the -p 5984:5984 it doesn't work. I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port. How can I make an application inside the container (in this case the index.html at 27019) reach another port inside the same container, without having to publish both of them? Can it be generalized to more than two ports? The final objective would be to have a complete application running on a single port on a server/computer, while listening to several ports inside the Docker container(s).
How to access several ports of a Docker container inside the same container?
0
0
0
42
42,103,374
2017-02-08T01:58:00.000
1
1
0
1
python,python-2.7,salt,salt-stack,salt-cloud
42,263,855
1
false
0
0
The salt packages are built using the system python and system site-packages directory. If something doesn't work right, file a bug with salt. You should avoid overwriting the stock python, as that will result in a broken system in many ways.
1
1
0
I am trying to set up a salt-master/salt-cloud on CentOS 7. The issue I am having is that I need Python 2.7.13 to use salt-cloud to clone VMs in VMware vCenter (it uses pyvmomi). CentOS comes with Python 2.7.5, which Salt has a known issue with (SSL doesn't work). I have tried to find a configuration file on the machine to change which Python version it should use, with no luck. I see two possible fixes here: somehow overwrite Python 2.7.5 with 2.7.13 so that it is the only Python available, OR, if possible, change the Python path Salt uses. Any ideas on how to do either of these would be appreciated. (Or another solution that I haven't mentioned above?)
How to change Default Python for Salt in CentOS 7?
0.197375
0
0
1,084
42,104,540
2017-02-08T04:10:00.000
35
0
1
0
python,regex,python-3.x,replace,capturing-group
42,104,650
4
true
0
0
Because it's supposed to replace the whole occurrence of the pattern: "Return the string obtained by replacing the leftmost non-overlapping occurrences of the pattern in string by the replacement repl." If it were to replace only some subgroup, then complex regexes with several groups wouldn't work. There are several possible solutions:
Specify the pattern in full: re.sub('ab', 'ad', 'abc') - my favorite, as it's very readable and explicit.
Capture the groups which you want to preserve and then refer to them in the replacement (note that it should be a raw string to avoid escaping): re.sub('(a)b', r'\1d', 'abc').
Similar to the previous option: provide a callback function as the repl argument and make it process the Match object and return the required result.
Use lookbehinds/lookaheads, which are not included in the match but affect matching: re.sub('(?<=a)b', r'd', 'abxb') yields adxb. The ?<= at the beginning of the group says "it's a lookbehind".
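A short, runnable recap of the four options; the input strings are just illustrative:
import re

print(re.sub('ab', 'ad', 'abc'))              # 'adc' - spell out the full pattern
print(re.sub('(a)b', r'\1d', 'abc'))          # 'adc' - backreference to the captured group
print(re.sub('a(b)', lambda m: 'ad', 'abc'))  # 'adc' - callback returns the replacement for the whole match
print(re.sub('(?<=a)b', 'd', 'abxb'))         # 'adxb' - the lookbehind is not part of the match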
1
27
0
re.sub('a(b)', 'd', 'abc') yields dc, not adc. Why does re.sub replace the entire match, instead of just the capturing group '(b)'?
Why does re.sub replace the entire pattern, not just a capturing group within it?
1.2
0
0
26,730
42,105,716
2017-02-08T05:55:00.000
0
0
0
0
python-3.x,pandas,matplotlib,pyspark,apache-spark-sql
42,110,000
1
false
0
0
Check out the Hortonworks sandbox. It's a virtual machine with Hadoop and all its components - such as Spark and HDFS - installed and configured. In addition to that, there is a notebook called Zeppelin that allows you to write scripts in Python or other languages. You're also free to install Python libs and access them through the notebook, even though I'm pretty sure it comes with its own data visualisation. Note that the Spark DataFrame type is not compatible with the pandas one; you'll have to convert your data to a simple matrix or a pandas DataFrame and integrate it back into Spark afterwards if needed.
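As a rough sketch of that conversion step (assuming an existing SparkSession named spark; the table name is a placeholder, not part of the original question):
spark_df = spark.sql('SELECT * FROM some_table')     # 'some_table' stands in for your source

pandas_df = spark_df.toPandas()                      # collects all rows to the driver, so the data must fit in memory
pandas_df.plot(kind='bar')                           # any pandas/matplotlib manipulation or plotting

spark_df_again = spark.createDataFrame(pandas_df)    # hand the data back to Spark if more SQL is needed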
1
0
1
I want to know which interpreter is good for Python to use features like NumPy, pandas and matplotlib, together with an integrated IPython notebook. Also, I want to integrate this with Apache Spark. Is it possible? My aim is to load different tables from different sources like Oracle, MS SQL, and HDFS files and transform them using PySpark and Spark SQL. Then I want to use pandas/matplotlib for manipulation and visualization.
Integrate Spark SQL using PySpark with a Python interpreter, pandas and an IPython notebook
0
0
0
186
42,105,805
2017-02-08T06:03:00.000
0
0
1
0
python,api,microservices
42,114,560
2
false
0
0
An API gateway is not needed for internal service-to-service communication. But you do need a service registry or some kind of dynamic load-balancing mechanism to reach the services.
1
0
0
I would like to know how to set up communication between services. I am using an API Gateway for the outside of the system to communicate with the services within. Is it necessary for a service to call another service through the API Gateway, or can it call the other service directly? Thank you.
Microservices Communication Design
0
0
1
3,274
42,108,324
2017-02-08T08:40:00.000
0
0
0
0
python,machine-learning,scikit-learn,decision-tree
42,115,789
2
false
0
0
In general - no. Decision trees work differently than that. For example, a tree could have a rule under the hood that if feature X > 100 OR X < 10 and Y = 'some value', then the answer is Yes; if 50 < X < 70, the answer is No; etc. In the case of a single decision tree you may want to visualize it and analyse the rules. With an RF model that is not possible, as far as I know, since you have a lot of trees working under the hood, each with independent decision rules.
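A minimal sketch of inspecting a fitted tree's rules with scikit-learn; clf, X_train, y_train and feature_names are placeholders for the questioner's own classifier and data, not code from the original answer:
from sklearn.tree import DecisionTreeClassifier, export_graphviz

clf = DecisionTreeClassifier(max_depth=3)
clf.fit(X_train, y_train)                        # X_train / y_train assumed to exist

# Dump the learned split rules to a Graphviz .dot file for visual inspection.
export_graphviz(clf, out_file='tree.dot',
                feature_names=feature_names,     # assumed list of column names
                class_names=['Yes', 'Done', 'No'],
                filled=True)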
1
0
1
I have trained my model on a data set using decision trees, and it has 3 output classes - Yes, Done and No. I found out which features are most decisive in making a decision by checking the feature importances of the classifier. I am using Python and sklearn as my ML library. Now that I have found the most decisive feature, I would like to know how that feature contributes: whether the relation is positive, such that if the feature value increases it leads to Yes, or negative, such that it leads to No, and so on; I would also like to know the magnitude of that relation. I would like to know if there is a solution to this, and ideally one that is independent of the algorithm of choice; please try to provide solutions that are not specific to decision trees but rather general solutions for all algorithms. Ideally there would be some way to tell me something like: for feature x1 the relation is 0.8*x1^2, for feature x2 the relation is -0.4*x2, just so that I would be able to analyse how the output depends on the input features x1, x2, and so on. Is it possible to find out whether a high value for a particular feature points to a certain class, or whether a low value does?
How to know the factor by which a feature affects a model's prediction
0
0
0
998
42,109,930
2017-02-08T10:00:00.000
2
0
0
0
c#,python-2.7,ironpython,roslyn
42,208,014
1
true
1
1
Found it. IronPython can use C# classes via import, and the initializer invocation value = new SomeObject { Name = name } changes to value = SomeObject(Name = name).
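A small sketch of what that looks like in IronPython; the assembly and namespace names are placeholders, and SomeObject stands in for the questioner's class:
import clr
clr.AddReference('MyCSharpLibrary')        # assumed assembly that contains SomeObject
from MyNamespace import SomeObject         # assumed namespace

# C#: new SomeObject { Width = 600, Height = 400 }.Export(model_, stream);
obj = SomeObject(Width=600, Height=400)    # keyword arguments in the constructor call set the properties
obj.Export(model_, stream)                 # model_ and stream come from the surrounding code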
1
1
0
Well, I need to translate C# code into IronPython. The current problem is finding the best way to translate an initialization like this, for example: case SomeObject.FieldCase: new SomeObject { Width = 600, Height = 400 }.Export(model_, stream); break; Do you have any ideas on how to make it similar? I'm interested only in the object-initialization code; the case statement was already translated. For the translation we use Roslyn, so we can get all syntax nodes. In other cases I do something like this: model = new Model; model.SomeField = field; model.SomeField2 = field2; But this way is not so easy to develop.
Object initialization in IronPython
1.2
0
0
356
42,110,293
2017-02-08T10:16:00.000
0
0
0
0
python,matlab,tensorflow
54,422,825
3
false
0
0
I used a MEX function for inference via the C++ API of TensorFlow once. That's pretty straightforward. I had to link the required TensorFlow libs statically from source, though.
1
1
1
I want to integrate MATLAB and TensorFlow. I can run TensorFlow natively in Python, but I am required to use MATLAB for image processing. Can someone please help me out with this one?
How will I integrate MATLAB to TensorFlow?
0
0
0
1,628