Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
46,630,087
2017-10-08T10:31:00.000
1
0
1
0
python-3.x,mod-wsgi
46,630,144
1
false
1
0
Current understanding is that 32 bit builds are broken. That or you are mixing 32 bit and 64 bit versions of Python, Apache or the Windows compilers. They must be all 32 bit or all 64 bit.
1
0
0
I want to install mod_wsgi for a Python web application on Windows 10, but when I run the pip install mod_wsgi command I get an error: error: command 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\link.exe' failed with exit status 1120. Can anybody help resolve this problem?
mod_wsgi installation on Windows 10
0.197375
0
0
612
46,630,267
2017-10-08T10:52:00.000
3
0
1
0
python,python-2.7,python-3.x,pip
46,630,308
1
true
0
0
If you have the 2 versions really installed, you should have a pip2 or pip2.x available in your PATH
1
2
0
I would like to ask how to use pip install for Python 2.7, when I previously installed for and was using Python 3.6? (I now have two versions of Python on Windows.) pip install ... keeps installing for Python 3.6. I need to use the previous version to rewrite the code in Python 2.7 (this is for building a Kivy app; although Kivy says it now supports Python 3, it also shows a warning). In order to do this, I have to import the necessary modules: kivy and numpy. Hope for feedback on this, thanks.
How to pip install for Python 2.7, having used Python 3.6 before, on Windows
1.2
0
0
1,491
46,630,410
2017-10-08T11:08:00.000
3
0
1
0
python,anaconda,conda,antivirus,pythonw
46,630,411
3
true
0
0
Anaconda installs and updates can silently fail due to conflicts with third-party antivirus programs (for me it's WebRoot). An indicator of Anaconda antivirus conflicts is missing .exe and .bat files, and sometimes requests to reboot. The fix is to shut down the antivirus program and reinstall Anaconda. I suspect Anaconda isn't following correct Windows installer procedures, as it's the only installer that conflicts with WebRoot.
2
2
0
Anaconda 2 and 3 are installing without errors. I reboot because the installer prompts me to do so. When I open the Anaconda Prompt, python, pythonw or activate.bat aren't found. Looking in the Anaconda[2|3] folder, I can't find expected .exe and .bat files. What's going wrong? I also notice that conda update --all wants to update many libs and eventually errors out.
Why isn't Anaconda for Windows Installing Properly?
1.2
0
0
6,943
46,630,410
2017-10-08T11:08:00.000
2
0
1
0
python,anaconda,conda,antivirus,pythonw
64,310,075
3
false
0
0
INSTALL IT ON AN EXTERNAL DISK! I had a hard time trying to install Anaconda, because the install was never complete: Anaconda Navigator and the prompt were always missing. After a cycle of install/uninstall adopting different approaches, the only thing that finally made Anaconda work properly on my computer was installing it on an external disk (or a pen drive). It solved my problems, so give it a try!
2
2
0
Anaconda 2 and 3 are installing without errors. I reboot because the installer prompts me to do so. When I open the Anaconda Prompt, python, pythonw or activate.bat aren't found. Looking in the Anaconda[2|3] folder, I can't find expected .exe and .bat files. What's going wrong? I also notice that conda update --all wants to update many libs and eventually errors out.
Why isn't Anaconda for Windows Installing Properly?
0.132549
0
0
6,943
46,634,665
2017-10-08T18:35:00.000
0
0
0
0
python-3.x,python-requests
46,694,608
1
false
0
0
I choose to do requests.head(), inspect the content-type and if the type is something that should be fetched, do requests.get() to get the body. The extra network I/O of fetching headers twice is outweighed by not fetching bodies of other content types.
1
0
0
Using the Python requests library, is there a way to fetch the HTTP response headers and only fetch the body over the network when the Content-Type header is some specific type? I can of course issue a HEAD request, inspect the Content-Type and if the type matches, issue a GET request. But is there a way to avoid fetching the HTTP headers twice?
Fetch content of HTML page using python requests depending on Content-Type?
0
0
1
78
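A minimal sketch of the HEAD-then-GET approach the answer above describes; the URL is a placeholder:

    import requests

    url = "https://example.com/page"          # hypothetical URL
    head = requests.head(url, allow_redirects=True)
    if head.headers.get("Content-Type", "").startswith("text/html"):
        body = requests.get(url).text         # body fetched only for wanted types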
46,636,125
2017-10-08T21:13:00.000
0
0
0
1
python,terminal
46,636,156
1
false
0
0
As far as I can understand from your question, you can make the terminal fullscreen by pressing F11 (at least in Ubuntu).
1
0
0
How do I write/execute a Python script fullscreen in the terminal? I want to write a small program which should be displayed like "vim", "sl", or "nano".
Execute Python script in terminal fullscreen
0
0
0
675
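The answer above addresses the terminal emulator itself; if the goal is a program that takes over the whole terminal the way vim or nano does, the standard-library curses module (Unix only) is the usual route. A minimal sketch:

    import curses

    def main(stdscr):
        stdscr.clear()
        stdscr.addstr(0, 0, "Fullscreen terminal app; press any key to exit.")
        stdscr.refresh()
        stdscr.getch()            # block until a key is pressed

    curses.wrapper(main)          # initialises and restores the terminal safely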
46,637,916
2017-10-09T02:06:00.000
1
0
0
0
python
46,638,373
1
true
0
1
Firstly, that doesn't happen on Windows XP (or any common minesweeper implementation) every time; it's just very likely if you are playing on a low difficulty. Some ideas, though:
1. Generate the map after the first click. This lets you avoid the area the user clicked, giving you the large swath you desire, simply by tweaking your mine placement algorithm to avoid the area around the click.
2. Generate the map, but change it if insufficient space would be exposed. This will (probably) give a faster reaction on the first click, as the map will likely already be generated.
3. Don't do this. As mentioned, this is not how Windows XP worked, but there was a high likelihood of it happening naturally on easier difficulties. It might be worth recalculating the map if the user clicks on a mine on the first move, but otherwise leave it to your random distribution.
Remember that (except in some custom modes) there are going to be many more empty squares than squares with mines. Hopefully that will get you started.
1
0
0
I was thinking about creating minesweeper in python/pygame. However, I was stumped when I was trying to figure out a way to guarantee a large swath of empty space on the first move (such as in minesweeper on Windows XP). Does anyone have a method for doing this? I don't want code, just words. Thank you in advance
Exposing a Large Swath of Tiles in Minesweeper
1.2
0
0
50
46,641,080
2017-10-09T07:39:00.000
3
0
1
1
python,linux,windows,command,gpu
46,664,350
1
true
0
0
The following solution works: on Linux I use lsmod (or /sbin/lsmod; thanks to n00dl3) and look for any occurrence of "nvidia", and on Windows I use wmic path win32_VideoController get name to get the GPU information.
1
1
0
I'm currently writing some integration tests which should run on different physical machines and VMs with different OS. For one type of test I have to find out if an nvidia-graphic card is installed on the running machine. I don't need any other information - only the vendor name (and it would be OK if I only knew if it is an NVIDIA graphic card or not - not interested in other vendors). I can only use the python standard lib so I think the best way is to use subprocesses and using the shell. Are there some commands for Windows(Win10x64) and Linux(Fedora, CentOS, SUSE) (without installing any tools or external libs) to find out the gpu vendor?
Getting gpu vendor name on windows and linux
1.2
0
0
794
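A sketch of the approach from the answer above, standard library only; the command names are those given in the answer, and the lsmod path may differ per distribution:

    import platform
    import subprocess

    def has_nvidia_gpu():
        if platform.system() == "Windows":
            out = subprocess.check_output(
                ["wmic", "path", "win32_VideoController", "get", "name"])
        else:
            out = subprocess.check_output(["/sbin/lsmod"])  # path may vary by distro
        return b"nvidia" in out.lower()

    print(has_nvidia_gpu())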
46,642,598
2017-10-09T09:05:00.000
0
0
1
0
python,python-3.x,count
46,642,725
2
false
0
0
You can use a variable that is incremented each time the function is called or stopped.
1
0
0
I am trying to figure out how to count how many times my main program has been called; let's call the program "test". Different sub-functions can be called within the program, so I want to be able to count those as well; let's call them "program-1, program-2, ...etc". I also want to see how many times the program has been stopped, i.e. how many times a user needed to push the kill switch. Does anyone out there have any idea how to do this?
Count number of times my code has been called?
0
0
0
198
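One way to implement the counter the answer above suggests is a small decorator; this is a sketch, and persisting the count across program runs would additionally need a file or database:

    import functools

    def counted(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            wrapper.calls += 1
            return func(*args, **kwargs)
        wrapper.calls = 0
        return wrapper

    @counted
    def program_1():
        pass

    program_1()
    print(program_1.calls)   # 1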
46,645,664
2017-10-09T11:44:00.000
0
0
0
0
python,amazon-web-services,amazon-ec2,botocore
46,651,021
1
false
1
0
FYI: the problem was caused by the fact that credentials from another account were used, due to our setup with parent and child accounts.
1
0
0
When I do something like: ec2_client.describe_images(ImageIds=['ami-123456']) The response I get is missing the 'Tags'. This is not the case when I do the same call using aws cli: aws ec2 describe-images --image-ids ami-123456
botocore - Tags are missing from 'describe_images' response
0
0
1
113
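If credentials from another account are the culprit, pinning the profile explicitly can confirm it. A sketch using boto3 (which wraps botocore); the profile name and AMI id are placeholders:

    import boto3

    session = boto3.session.Session(profile_name="child-account")  # hypothetical profile
    ec2 = session.client("ec2")
    resp = ec2.describe_images(ImageIds=["ami-123456"])
    print(resp["Images"][0].get("Tags"))   # present when the owning account matches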
46,646,141
2017-10-09T12:10:00.000
2
0
0
0
python,machine-learning,multilabel-classification
46,646,311
1
true
0
0
Of course it can be done with numbers. After all, the text itself is converted to numbers to be classified. But you should not use regression for that. It is clearly a case for classification. A regular classifier (for example, a neural network) usually has multiple outputs, one for each class. Each output returns the probability that the input vector belongs to that particular class. In standard classification, you assign it to the class with the maximum probability. In your case, just assign it to all the classes for which p > 0.5 (assuming that the output is in [0, 1]). Regarding the question of whether your problem is a multi-regression or multi-classification problem, you can't know that just by looking at the inputs. You decide it based on what you are trying to find. Choose regression if you are trying to find numeric values in a continuous range (for example, predict the price and number of sales for a given product). Choose classification if you have a number of attributes that the input has or doesn't have.
1
0
1
I was working on a numeric dataset and apparently it is a multi-variable output regression. I wanted to know if you can have multi-label classification on a numeric dataset or whether it is strictly for text. For example, Stack Overflow can categorize every text/code into multiple tags like python, flask, python2.7... But can something like that be done with numbers? Sorry, I know this is a noob question, but I wanted to know the answer. Thanks in advance.
Is multi-label classification for text only?
1.2
0
0
36
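A sketch of the thresholding step the answer above describes, assuming a classifier that outputs per-class probabilities in [0, 1]; the numbers are made up:

    import numpy as np

    probs = np.array([0.9, 0.2, 0.7])     # hypothetical per-class probabilities
    labels = (probs > 0.5).astype(int)    # assign every class with p > 0.5
    print(labels)                         # [1 0 1]: classes 0 and 2 apply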
46,648,039
2017-10-09T13:46:00.000
0
0
0
1
python,zos,mvs
46,654,808
1
true
1
0
Working with MVS FTP and JES can be very specific. For example my MVS ID was MVSIDD. My jobcard had a jobname of MVSIDDXY. So the submit_wait_job() function would submit the job correctly and it would run successfully. The problem came with returning the JES output back to FTP. It was expecting a jobname with my id and a single character not two. By changing the jobname in the jobcard to MVSIDDX the function worked as expected and waited until the job was over and then returned all the JES output with it.
1
0
0
I have a python zosftplib function call that submits a MVS job successfully, but it does not recognize that the job completed and it does not receive the JES output from the job. I can successfully make the MVS FTP connection and can upload and download files. The code looks like this: job = Myzftp.submit_wait_job(jcl) The call eventually displays the following error message. File "C:\Python27\lib\site-packages\zosftplib.py", line 410, in submit_wait_job %(msg, resp)) ZftpError: 'submit_wait_job error: 550 JesPutGet aborted, job not found (last response:250 Transfer completed successfully.)' Any suggestions would be helpful on how I can resolve this.
zosftplib submit_wait_job(jcl) function does not receive JES output
1.2
0
0
694
46,649,482
2017-10-09T15:00:00.000
2
1
1
0
python,static
46,649,763
2
false
0
0
If a method does not need access to the current instance, you may want to make it either a classmethod, a staticmethod or a plain function. A classmethod gets the current class as its first param. This enables it to access the class attributes, including other classmethods or staticmethods; it is the right choice if your method needs to call other classmethods or staticmethods. A staticmethod only gets its explicit arguments; it is actually nothing but a function that can be resolved on the class or instance. The main points of staticmethods are specialization (you can override a staticmethod in a child class and have a method, classmethod or instancemethod, of the base class call the overridden version of the staticmethod) and ease of use (you don't need to import the function separately from the class; at this point it's bordering on laziness, but I've had a couple of cases with dynamic imports etc. where it happened to be handy). A plain function is, well, just a plain function: no class-based dispatch, no inheritance, no fancy stuff. But if it's only a helper function used internally by a couple of classes in the same module and not part of the classes' nor the module's API, it's possibly just what you're looking for. As a last note: you can have "some kind of" privacy in Python. Mostly, prefixing a name (whether an attribute, method, class or plain function) with a single leading underscore means "this is an implementation detail, it's NOT part of the API, you're not even supposed to know it exists, it might change or disappear without notice, so if you use it and your code breaks then it's your problem".
1
1
0
I have a class that includes some auxiliary functions that do not operate on object data. Ordinarily I would leave these methods private, but I am in Python so there is no such thing. In testing, I am finding it slightly goofy to have to instantiate an instance of my class in order to be able to call these methods. Is there a solid theoretical reason to choose to keep these methods non-static or to make them static?
Should methods that do not act on object data be made static?
0.197375
0
0
66
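A small sketch of the three options the answer above compares; the names are made up for illustration:

    class Greeter:
        greeting = "Hello"

        @classmethod
        def greet(cls, name):              # receives the class, can use class attrs
            return "{}, {}".format(cls.greeting, name)

        @staticmethod
        def shout(text):                   # receives only its explicit arguments
            return text.upper()

    def _strip(text):                      # plain helper, "private" by convention
        return text.strip()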
46,657,070
2017-10-10T00:51:00.000
1
0
0
1
python,azure,azure-webjobs
46,657,105
2
false
1
0
You might want a second webjob that you can incorporate a healthcheck and restart the primary webjob if it detects no activity in your processes. Another idea could be to use azure automation and have a powershell script that just restarts the webjob every hour.
1
1
0
I have Python webjobs running on Azure. There are times when the script hangs, and I need to force-restart it so the next iteration can pick it up. How do I configure force-stopping a webjob if it runs longer than 1 hour? Basically I want to mimic the task scheduler's behavior. My files on the webjob:
run.cmd: D:\home\Python35\python.exe main.py
main.py: just another Python file
settings.job: {"is_singleton":true}
At a given time, I want only 1 instance of the job running. Edit (answer): as a workaround, I changed the continuous webjob to a triggered one and added this in the app settings: WEBJOBS_IDLE_TIMEOUT = 120. I'm printing something to the console every now and then; if no CPU activity is detected for 2 minutes, the job will be aborted.
Azure WebJob: Force stop if running longer than X mins
0.099668
0
0
431
46,661,246
2017-10-10T07:46:00.000
0
0
0
0
python,python-2.7,openerp,odoo-10,erp
58,504,044
2
false
1
0
Thanks for your answer, it really helped me. But when I remove the edit/create button for some group (e.g. Purchase: User), the edit/create buttons are also removed for the higher groups (e.g. Purchase: Manager) of the specified group. My case: I removed the edit button for the Purchase: User group, and I see the edit button was also removed for the Purchase: Manager group. The solution I tried: I created one more view for the Purchase: Manager group and set edit to TRUE, so I created two views for two groups. I'm looking for a better solution that achieves this scenario with a single view, as it isn't good to create many views for many groups.
1
3
0
I want to hide the edit button based on user group. However, I don't want to edit ir.model.access.csv, because some processes in my system flow must still let some user groups write to the model from code. Is there a way to hide the edit button from some groups of users?
Odoo How to hide edit button based on User Group?
0
0
0
2,386
46,661,848
2017-10-10T08:21:00.000
0
0
0
0
android,python-2.7,python-3.x,kivy
52,576,484
2
false
0
1
The fact that it runs means your requirements are probably OK. As mentioned previously, I would update, as "android_new" is now "android". That might fix the touch, but the missing image is probably a path issue. I would suggest posting on the Kivy forums if you still have issues. "kivy" should be enough for requirements, but adding "python2" or "python3crystax" is good to explicitly state which Python version you want to use.
2
0
0
I would like to discuss an issue. What can be put in requirements=... in buildozer.spec? Is it necessary to put sdl2 and python2 so that the app works fine on the phone? Is it better to build using android_new or android? I have the main.py code that depends on kivy modules and some of its widgets, plus numpy and some built-in Python 2 modules. The app works fine on Windows using Python 2 (and also Python 3); the app uses three .py files for storing functions and objects. When I deploy the app to the phone using buildozer, the app does not crash, but the touch for the button does not work and the Image widget does not show. This is built using buildozer android debug. Thanks.
List of buildozer requirements to build an apk for Kivy-Python app
0
0
0
1,585
46,661,848
2017-10-10T08:21:00.000
0
0
0
0
android,python-2.7,python-3.x,kivy
46,662,406
2
true
0
1
As said in my last comment on your other post, the default buildozer.spec produced by "buildozer init" should be enough to compile a working apk, including the image and a clickable button, so it's not necessary to add sdl2 or python2 to your requirements. "android_new" or "android"? It's now called "android" and "android_old", so you might update your buildozer installation, which may resolve your other problems; but when I was using your version, I used "android_new".
2
0
0
I would like to discuss an issue. What can be put in requirements=... in buildozer.spec? Is it necessary to put sdl2 and python2 so that the app works fine on the phone? Is it better to build using android_new or android? I have the main.py code that depends on kivy modules and some of its widgets, plus numpy and some built-in Python 2 modules. The app works fine on Windows using Python 2 (and also Python 3); the app uses three .py files for storing functions and objects. When I deploy the app to the phone using buildozer, the app does not crash, but the touch for the button does not work and the Image widget does not show. This is built using buildozer android debug. Thanks.
List of buildozer requirements to build an apk for Kivy-Python app
1.2
0
0
1,585
46,663,950
2017-10-10T10:02:00.000
3
0
1
1
python,anaconda,ubuntu-16.04
46,665,023
2
false
0
0
This is not just an Ubuntu issue but a Linux-wide issue. The system Python is at the core of the apt-get and yum package managers. Also, parts of the modern GRUB setup are based on Python, so removing the system Python can make your machine unbootable. In short, this affects RHEL-related distributions (CentOS/Fedora) and Debian-related distributions (Debian/Ubuntu).
1
1
0
I have just spent 2 days trying to build Tensorflow from source, and finally succeeded when I realized that sudo pip (even with the -H flag) was not finding my anaconda pip, but instead finding a pip installed with apt. Running, then, sudo -H ~/anaconda3/bin/pip ... fixed my problem. In order to avoid this kind of issue ever again (I had several issues in this process with the "wrong" python being used), is it possible for me to completely remove python from my system, keeping only Anaconda? Is it advisable?
Ubuntu completely remove python that is not Anaconda
0.291313
0
0
417
46,665,924
2017-10-10T11:44:00.000
0
0
0
0
python,bloomberg
46,687,728
3
false
0
0
Intraday tick requests are limited to the following fields: TRADE, BID, ASK, BID_BEST, ASK_BEST, BID_YIELD, ASK_YIELD, MID_PRICE, AT_TRADE, BEST_BID, BEST_ASK, SETTLE. You can optionally include the following fields: Action Codes, BicMic Codes, Broker Codes, Client Specific Fields, Condition Codes, Eq Ref Price, Exchange Codes, Indicator Codes, Non Plottable Events, Rps Codes, Spread Price, Trade Id, Trade Time, Upfront Price, Yield. As for IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, you can pull them using a ReferenceDataRequest, or subscribe to IN_AUCTION_RT and RT_EXCH_TRADE_STATUS, respectively.
3
0
0
I am using the blpapi 3.5.5 Windows Python API. I am getting intraday tick data using //blp/refdata with the following fields: BEST_BID, BEST_ASK and TRADE. Using the Bloomberg terminal I found the fields IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, but none of them works, returning NotFoundException. Do you know any field containing stock info (e.g. in auction/continuous trading) available in //blp/refdata?
Bloomberg API /blp/refdata: stockinfo
0
0
1
1,211
46,665,924
2017-10-10T11:44:00.000
0
0
0
0
python,bloomberg
46,666,854
3
false
0
0
Those fields are not available for all securities. For example IN_AUCTION returns a value for VOD LN Equity but not for IBM US Equity. HELP HELP may be able to explain why. So you need to add some logic and check for the exception.
3
0
0
I am using the blpapi 3.5.5 Windows Python API. I am getting intraday tick data using //blp/refdata with the following fields: BEST_BID, BEST_ASK and TRADE. Using the Bloomberg terminal I found the fields IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, but none of them works, returning NotFoundException. Do you know any field containing stock info (e.g. in auction/continuous trading) available in //blp/refdata?
Bloomberg API /blp/refdata: stockinfo
0
0
1
1,211
46,665,924
2017-10-10T11:44:00.000
0
0
0
0
python,bloomberg
46,724,437
3
true
0
0
After communicating with support, we finally found the answer. When sending the request, 'conditionCodes' needs to be set to True; then, depending on the stock exchange, codes (mainly for auctions) will be sent, such as OA for opening auction, IA for intraday auction, etc. Some of the codes can be found in the terminal using QR <GO>.
3
0
0
I am using the blpapi 3.5.5 Windows Python API. I am getting intraday tick data using //blp/refdata with the following fields: BEST_BID, BEST_ASK and TRADE. Using the Bloomberg terminal I found the fields IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, but none of them works, returning NotFoundException. Do you know any field containing stock info (e.g. in auction/continuous trading) available in //blp/refdata?
Bloomberg API /blp/refdata: stockinfo
1.2
0
1
1,211
46,666,941
2017-10-10T12:34:00.000
3
0
0
1
python,unix
46,667,023
3
true
0
0
If you are not root, then you cannot access foo. Therefore you can't check whether foo/bar exists, and it returns False because no directory with that name can be found (since the parent directory cannot be accessed).
2
7
0
Let's say I have directories like foo/bar/, where bar is chmod 777 and foo is 000. When I call os.path.isdir('foo/bar') it just returns False, without any Permission Denied exception or anything. Why is it like that? Shouldn't it return True?
os.path.isdir() returns False on inaccessible, but existing directory
1.2
0
0
1,262
46,666,941
2017-10-10T12:34:00.000
2
0
0
1
python,unix
46,666,984
3
false
0
0
os.path.isdir can return True or False, but cannot raise an exception. So if the directory cannot be accessed (because parent directory doesn't have traversing rights), it returns False. If you want an exception, try using os.chdir or os.listdir that are designed to raise exceptions.
2
7
0
Let's say I have directories like foo/bar/, where bar is chmod 777 and foo is 000. When I call os.path.isdir('foo/bar') it just returns False, without any Permission Denied exception or anything. Why is it like that? Shouldn't it return True?
os.path.isdir() returns False on inaccessible, but existing directory
0.132549
0
0
1,262
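A sketch contrasting the two behaviors described in the answers above; the paths are the ones from the question:

    import os

    print(os.path.isdir("foo/bar"))   # False: foo (mode 000) cannot be traversed

    try:
        os.listdir("foo/bar")         # raises instead of returning False
    except PermissionError as exc:    # Python 3; catch OSError on Python 2
        print("Permission denied:", exc)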
46,667,754
2017-10-10T13:16:00.000
0
1
0
0
python,bots,facebook-messenger,messenger,facebook-messenger-bot
46,671,593
1
false
1
0
The account linking feature does not support this type of token validation. You would need to send a request to your auth server to check if the person is still logged in.
1
0
0
I was able to get the callback with the redirect_uri and auth code, and I was able to authorize the user and redirect him, but I am not getting account_linking in the request object after a successful login. I.e., I want to check whether the user is logged in or not for every message he sends.
Facebook messenger account linking
0
0
1
125
46,669,453
2017-10-10T14:36:00.000
1
0
0
1
python,linux,windows,remote-access,remote-server
46,670,748
1
false
0
0
For security reasons, most operating systems do not advertise information over the network. While tools such as nmap can deduce the OS running on a remote system by scanning its ports, the only way to reliably know the OS is to log in to the system. In many cases the OS is reported as part of the login process, so establishing a connection over the network will suffice to determine it. Running "uname -a" on the remote system will also retrieve the OS type on Linux systems. The following retrieves the output from HOST, which usually includes the OS type; substitute a valid user name for UNAME and host name for HOST.

    #!/usr/bin/env python3
    import subprocess

    CMD = "uname -a"
    conn = subprocess.Popen(["ssh", "UNAME@HOST", CMD], shell=False,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    res = conn.stdout.readlines()  # one entry per line of remote output
    print(res)
1
2
0
How can I identify a remote host's OS (Unix/Windows) using Python? One solution I found is to check whether port 22 is open, but I came to know that some Windows hosts also have port 22 open with connections refused. Please let me know an efficient way to do the same. Thanks in advance.
Efficient way of finding remote host's operating system using Python
0.197375
0
1
3,377
46,671,386
2017-10-10T16:16:00.000
1
0
1
1
python,macos,python-2.7,python-3.x
57,103,379
2
false
0
0
None of the above worked for me with Python 3.7 installed on OS X Mojave. But simply changing the Interpreter to "python3" in the Python Launcher preferences solved the problem.
1
2
0
The problem is that when I run my Python programs through Python Launcher, it tries to run them in Python 2.7, causing print to need brackets and numerous other broken things. I downloaded Python Launcher with Python 3.6 from the python.org website. When opening Python Launcher > Preferences, the 'Interpreter' drop-down field has the following options: /usr/local/bin/pythonw, /usr/local/bin/python, /usr/bin/pythonw, /usr/bin/python, /sw/bin/pythonw. I don't know what the difference between python and pythonw is, or even what any of them mean, but no matter which one I select it always tries to run in Python 2.7. What makes it even more baffling is that when I choose to open my script in IDLE, it says right at the top (Python 3.6.3) and opens a window called 'Python 3.6.3 Shell'. How can I get the program to run using Python 3.6.3 through Python Launcher?
Python Launcher preferences in mac OSX not allowing selection of python 3.6 interpreter
0.099668
0
0
7,380
46,675,477
2017-10-10T20:31:00.000
3
0
1
0
python,python-3.x
46,675,840
3
false
0
0
These are parts of the syntax. Square brackets [] are used for: defining lists, e.g. list = [1, 2, 3]; indexing, e.g. ages[3] = 29; and more. Round brackets () are used for: defining tuples, e.g. retval = (x, y, z); operator precedence, e.g. result = (x + y) * z; class/function definitions and invocations, e.g. def func(x, y) or func(3, 7); and more.
1
6
0
I'm a non programmer who just started learning python (version 3) and am getting a little confused about when a square bracket is needed in my code vs round bracket. Is there a general rule of thumb?
In Python, when to use square or round brackets?
0.197375
0
0
31,684
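A few lines illustrating the distinction the answer above draws:

    numbers = [1, 2, 3]         # square brackets: list literal
    first = numbers[0]          # square brackets: indexing
    point = (4, 5)              # round brackets: tuple literal
    total = (1 + 2) * 3         # round brackets: grouping / precedence
    print(len(numbers))         # round brackets: function call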
46,675,769
2017-10-10T20:49:00.000
0
1
0
0
python-2.7,selenium
46,677,415
1
false
0
0
I don't see why you couldn't. You can install pip with apt install python-pip; you'll probably need to sudo that command unless you log in as root. Then you can just open a terminal and use the pip install command to get selenium. If that doesn't work, you can try running python -m pip install instead.
1
0
0
I have had a rough time getting my scripts to work on my Raspberry Pi Zero W, and the last program I need installed requires Selenium. This script was designed for Windows 10 + Python 2.7, because I make my scripts in this environment. I was wondering if it is possible to use Selenium on a Raspberry Pi Zero W, preferably headless if possible. I can't find any info, help or guidelines online anywhere and have no idea how to use pip in Raspbian (if it even has pip).
Selenium (Maybe headless) on raspberry pi zero w
0
0
1
757
46,676,738
2017-10-10T22:02:00.000
0
0
0
0
java,android,python,mobile,tensorflow
46,680,237
2
false
0
1
The short answer is: yes. You will be safe with Python, since it's the main front-end language for TensorFlow. Also, I agree with BHawk's answer above.
2
0
1
I've been learning a lot about the uses of Machine Learning and Google's Tensorflow. Mostly, developers use Python when developing with Tensorflow. I do realize that other languages can be used with Tensorflow as well, i.e. Java and C++. I see that Google is about to launch Tensorflow Lite, which is supposed to be a game changer for mobile devices. My question: can I be safe by learning Tensorflow using Python and still be able to develop mobile apps using this service?
Using Tensorflow on smartphones
0
0
0
249
46,676,738
2017-10-10T22:02:00.000
0
0
0
0
java,android,python,mobile,tensorflow
46,817,475
2
true
0
1
In short, yes. It would be safe to learn implementing TensorFlow using python and still comfortably develop machine learning enabled mobile apps. Let me elaborate. Even with TensorFlow Lite, training the data can only happen on the server side; only the prediction, or the inference happens on the mobile device. So typically, you would create your models on TensorFlow, often using python, and then leverage TensorFlow Lite to package that model into your app.
2
0
1
I've been learning a lot about the uses of Machine Learning and Google's Tensorflow. Mostly, developers use Python when developing with Tensorflow. I do realize that other languages can be used with Tensorflow as well, i.e. Java and C++. I see that Google is about to launch Tensorflow Lite, which is supposed to be a game changer for mobile devices. My question: can I be safe by learning Tensorflow using Python and still be able to develop mobile apps using this service?
Using Tensorflow on smartphones
1.2
0
0
249
46,678,454
2017-10-11T01:40:00.000
11
0
1
1
python,docker,containers,anaconda,conda
46,678,633
1
true
0
0
Docker does not replace anything. It is simply one way to do things. No, you should not have all of your dependencies right in your Dockerfile. I, for one, will be running pip install from a virtualenv without ever touching Docker/*conda unless I have a good reason. Your lack of requirements.txt is not a good reason :) Conda came out in 2012, well before Docker. Since Python has such a strong following in the non-programmer community, I rarely expect intelligible code, much less some type of DevOps ability; Conda was the perfect solution for this group. With Docker, you can have a functional Docker environment with FROM python:xx, COPY . /workdir, and RUN pip install -r requirements.txt (supposing you're using that file *ahem), but your developers will probably need a volume so they can work (so they need to know --volume). Also, if you're running Django you'll need ports configured (now they need --port and you need EXPOSE). Oh, and Django might need a database. Now you need another container and you're writing a docker-compose file. But consider the following, from almost all of my professional (DevOps) experience. If you just include requirements.txt: I can use that file in my Docker container; the requirements are all in one place; I can develop on my local machine with a venv if I want; Travis can install from requirements.txt and test on multiple versions without using Tox; setuptools handles it automatically, so my thing works with pip; I can reuse those Dockerfiles (or parts of them) with ECS, Kubernetes, etc.; I can deploy to EC2 without using Docker; I can install the package locally via pip. HTH, and don't get too locked in to one piece of technology!
1
18
0
I have seen many examples of dockerfiles with conda commands in them, and there are pre-built anaconda and miniconda containers. I must be missing something. Doesn't docker REPLACE virtualenv and conda? Shouldn't I have all of my dependencies right in my dockerfile? I don't understand what I gain from adding anaconda here. In fact it seems to make my container unnecessarily bigger if I have to pull a miniconda container when I'm not using all of miniconda's included modules.
what is the purpose of conda inside a container?
1.2
0
0
7,393
46,678,503
2017-10-11T01:49:00.000
4
1
1
0
python,debugging,pdb
46,681,066
1
true
0
0
python -m pdb foo.py will pop you into the debugger at the very beginning of the program. This is likely to be useful in very small programs which you want to analyse as a whole. In larger and more complex programs, where the situation you want to investigate arises after significant computation at the top of a tall function call stack, this sort of usage is very impractical. In such a case it is usually easier to set a hard breakpoint with import pdb; pdb.set_trace() at the point in your source code where the interesting situation arises. Then you launch the program normally, it executes normally, perhaps taking a significant time to perform many computations without your intervention, until it reaches the point you care about. Only when you reach the point of interest does the debugger ask you to intervene. As for performance: in the first case, you have to step through each and every statement in order to advance; in the second, the debugger is not invoked until you reach the point of interest. In the first case, the CPU spends nearly all of its time waiting for the human to respond; in the second it spends most of its time executing the program, until the point of interest is reached.
1
3
0
What are the differences between interactive debugging (python -m pdb foo.py) and a hard-coded breakpoint (import pdb; pdb.set_trace())? Most tutorials on debuggers only focus on the use of specific commands, but it would be interesting to understand: What is the best practice in choosing debugging modes? Do they have different performance in terms of computational time?
Python debugger: interactive debugging vs. hard-coded breakpoint
1.2
0
0
208
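A sketch of the hard-coded breakpoint style the answer above recommends for deep call stacks:

    def expensive_computation():
        result = sum(range(10 ** 6))      # runs at full speed, no debugger yet
        import pdb; pdb.set_trace()       # execution pauses only here
        return result

    expensive_computation()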
46,679,003
2017-10-11T02:49:00.000
1
0
1
0
python,heroku,dynamic,dyno
46,679,038
1
true
0
0
Heroku does not offer a persistent file system. You will need to store them in another service (like S3), or depending on what the contents of your files are, redesign to write and read from a database instead.
1
0
0
I have two Python files that create and read text from a .txt file; for them to work, they need to know the info inside the .txt file. In Heroku I have a scheduler that runs one file, then the other. The big problem is that the files are reset every time to their state from the original repo. How can I get around this?
How to get around Heroku resetting files?
1.2
0
0
102
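A sketch of the S3 alternative the answer above mentions, using boto3; the bucket and key names are placeholders:

    import boto3

    s3 = boto3.client("s3")
    s3.upload_file("state.txt", "my-bucket", "state.txt")    # persist after writing
    s3.download_file("my-bucket", "state.txt", "state.txt")  # restore before reading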
46,680,795
2017-10-11T05:56:00.000
1
0
0
0
python,python-2.7
46,681,839
1
false
0
0
So in a Poisson distribution, lambda is the mean and the variance at the same time, and if you draw infinitely often you will see that this is true. What you are asking for is like expecting to roll a die 10 times and get an average of exactly 3.5, since that's the expected mean. Nevertheless, you could generate a list with numpy.random.poisson, check whether the mean is what you want, and if not draw another 1000 samples and check again.
1
1
1
I want to generate a list of 1000 random numbers following a Poisson distribution with a fixed mean. Since the size is fixed to 1000, the sum would also be fixed. My first idea was to use numpy.random.poisson(lam, size), but it cannot guarantee a fixed mean for the list, so I am really confused.
How to generate a size 1000 random number list according to Poisson distribution and with a fixed mean(size)?
0.197375
0
0
561
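One way to act on the answer's suggestion: draw, check the sample mean, and redraw until it is close enough. A sketch; the tolerance is an arbitrary choice:

    import numpy as np

    target_mean, size = 5.0, 1000
    samples = np.random.poisson(lam=target_mean, size=size)
    while abs(samples.mean() - target_mean) > 0.01:   # redraw until close enough
        samples = np.random.poisson(lam=target_mean, size=size)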
46,681,209
2017-10-11T06:28:00.000
0
0
0
0
python-3.x,nlp,stanford-nlp
46,719,538
2
false
0
0
I would suggest that you read an introductory book on NLP to become familiar with the chain of processes you are trying to achieve. You are trying to do question answering, aren't you? If so, you should read about question-answering systems. The above sentence has to be morphologically analyzed (so read about morphological analyzers), syntactically parsed (so read about syntactic parsing) and semantically understood (so read about anaphora resolution and, in linguistics, theta theory). Ravi is called the agent and Ragu is called the patient or experiencer. Only then can you proceed to pursue your objectives. I hope this helps you!
1
0
1
My text contains text="Ravi beated Ragu". My question will be "Who beated Ragu?" and the answer should come out as "Ravi". How can this be done by natural language processing? Kindly guide me on how to proceed with syntactic, semantic and pragmatic analysis using Python.
Natural Language Processing (syntactic, semantic, pragmatic) Analysis
0
0
0
78
46,683,129
2017-10-11T08:14:00.000
7
1
1
0
python-2.7,apscheduler
46,683,202
1
false
0
0
With interval, you can specify that the job should run, say, every 15 minutes: a fixed amount of time between each run, and that's it. With cron, you can tell it to run every second Tuesday at 9am, or every day at noon, or on every 1st of January at 7pm. In cron, you define the minute, hour, day of month, month, day of week (e.g. Monday) and year when it should run, and you can assign periodicity to any of those (i.e. every Monday, or every fifth minute). Anything you can achieve with interval can also be achieved with cron, I think, but not the other way around.
1
4
0
I am using APScheduler for my project. I went through the APScheduler documentation, but I am not able to understand the actual difference between the 'interval' and 'cron' triggers. The following definitions are given in the docs: interval: use when you want to run the job at fixed intervals of time; cron: use when you want to run the job periodically at certain time(s) of day.
What is the difference between 'Interval' and 'Cron' triggers in APScheduler?
1
0
0
1,834
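A sketch of both trigger types, based on APScheduler 3.x's documented add_job interface:

    from apscheduler.schedulers.blocking import BlockingScheduler

    def job():
        print("running")

    sched = BlockingScheduler()
    sched.add_job(job, "interval", minutes=15)              # every 15 minutes
    sched.add_job(job, "cron", day_of_week="tue", hour=9)   # every Tuesday at 9am
    sched.start()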
46,683,335
2017-10-11T08:25:00.000
0
0
1
0
python,python-requests
46,684,566
1
false
0
0
It was an issue with the Python versions: my server's default was 2.6 and requests was installed for 2.7. I tried running python2.7 and then it worked. Closing this.
1
1
0
I have a Python script which uses the requests module. I had installed requests on my machine and the script runs fine. Now I wanted to run this script on a server so that it's always available (otherwise it requires my local machine to be running the script all the time for it to function). I installed requests (via pip install requests), and when I do pip freeze it does show requests as one of the installed modules. But when I run the script, I get an error: import requests ImportError: No module named requests. It is unable to find requests even when I try importing it in the Python shell on the server; it gives the same error, No module named requests. How do I get this going? TIA. EDIT: It was an issue with the version difference. I was trying on Python 2.6 (which happened to be the default on the server) and the requests module was installed for 2.7. After running the script with python2.7, it worked just fine.
unable to import requests in python on server
0
0
1
752
46,683,566
2017-10-11T08:37:00.000
2
0
1
0
python,pycharm
46,683,871
2
true
0
0
You could also run the part of your code you want to test/check in the console by selecting it, then right-clicking and choosing "Execute Selection in Console" (Alt+Shift+E). That's what I sometimes use when the debugger is not helpful. After running the code (you can also just "run" functions or classes) the console knows the functions, and you can use the same features that Spyder has. However, be aware that when you change the code you need to run it in the console once to update the console definitions!
2
0
0
I am writing Python scripts in PyCharm with IPython installed, so I can use the Python Console in PyCharm to type Python commands and check the immediate output of the code. However, when I run a script file by pressing the 'Run' button (Shift+F10), the variables and functions are not visible to the Python Console. This is, however, a feature of Spyder, another popular Python IDE. So here is my question: how can I configure PyCharm so that the results of running a Python script file are visible to the Python Console? Thanks.
how can I configure PyCharm so that running a Python script file is visible for Python Console
1.2
0
0
71
46,683,566
2017-10-11T08:37:00.000
0
0
1
0
python,pycharm
46,683,687
2
false
0
0
You cannot. But you can use pdb (which will break code execution where you need it, and you will be able to do the same things as in the Python Console). And, which is better and more powerful, you can use PyCharm's debugger. It represents all available variables in tree-like structures and is really handy.
2
0
0
I am writing Python scripts in PyCharm with IPython installed, so I can use the Python Console in PyCharm to type Python commands and check the immediate output of the code. However, when I run a script file by pressing the 'Run' button (Shift+F10), the variables and functions are not visible to the Python Console. This is, however, a feature of Spyder, another popular Python IDE. So here is my question: how can I configure PyCharm so that the results of running a Python script file are visible to the Python Console? Thanks.
how can I configure PyCharm so that running a Python script file is visible for Python Console
0
0
0
71
46,685,936
2017-10-11T10:30:00.000
0
1
0
0
python,python-telegram-bot
46,703,372
1
false
1
0
Used: bot.setGameScore(user_id=5432131, score=76, inline_message_id="uygrtfghfxGKJB"). I received these two parameters (user_id, inline_message_id) from update.callback_query.
1
0
0
I'm using the Bot API for Telegram. Through setGameScore I tried to set the game score of a user with user_id and score, but it's not working. I used bot.setGameScore(user_id=56443156, score=65). I am not setting it on a game message, only for an inline query, and I received this error: "Message to set game score not found".
How to set game score using setGameScore in python telegram bot
0
0
1
489
46,689,334
2017-10-11T13:21:00.000
3
0
1
0
python,colors,visual-studio-code,themes
63,959,173
3
false
0
0
Leonard's answer above is perfect. Just for the googlers in 2020, I would add that the command in Command Palette: Developer: Inspect TM Scopes seems to have changed to: Developer: Inspect Editor Tokens and Scopes. This option shows both the standard token types, as well as TextMate scopes.
1
14
0
Could some one explain to me please how to customize docstring color for Python in VSCode's default theme? I want to do it thru User Settings because want to be able to save my config file. I tried to use "editor.tokenColorCustomizations": {} but it affects all strings.
How to customize docstring color for Python in VSCode's default theme?
0.197375
0
0
4,945
46,690,417
2017-10-11T14:09:00.000
0
0
1
1
python,pip,windows-10
46,693,783
2
true
0
0
Solved: the problem was that Python wasn't installed in the right place (C:\python) but just for one user. I uninstalled and re-installed Python using the "custom" configuration.
1
0
0
I'm trying to install pip for Python 3.6 on Windows 10. I ran get-pip.py, but when I try to use pip in the terminal I get an error message: pip command is not recognized. I already added C:\Python36\Scripts to the environment variables. Is there anything I missed?
unable to use pip even after add to environmental variable
1.2
0
0
131
46,691,638
2017-10-11T15:02:00.000
1
0
1
0
python,ide,pycharm,anaconda
63,529,140
5
false
0
0
You can manually update the package and change the interpreter settings in either of two ways. Method 1: go to the Edit Configurations button at the top right of your PyCharm window and, after clicking on it, select the interpreter you want from the list. Method 2: go to Settings (on Windows) or Preferences (on Mac) and select a project interpreter from the dropdown menu. From there, you can change interpreter settings as well as install packages via the '+' sign.
1
29
0
It seems that PyCharm always updates the connected Python interpreter on startup and also scans and updates all packages if needed. For me this means whenever I open PyCharm there will be updating processes running in background and I have to wait sometimes for as good as a whole minute, which I find quite annoying. So the question is: does there exist any way to disable this automatic update mechanism? It would be best if I can manually update Python interpreter and the packages only if I want to.
How to disable PyCharm from automatically updating Python interpreter on startup
0.039979
0
0
5,257
46,692,833
2017-10-11T16:03:00.000
0
0
1
0
python,newline
46,697,185
1
false
0
0
JSON's options do some formatting. E.g., indent=4 will add newlines and indent the right amount for each level of sublist. Also, lists (or dictionaries) that will eventually be opened with JSON can first be treated as strings and reformatted as strings; they are still opened as lists (or dictionaries) by using json.load(f).
1
0
0
I know that more hours of research would find an answer to this, but I have limited time. I want the data files that I create in Python to show up in editors with real newlines (0A or 0D0A) rather than "\n". I want this because I need to human-read their contents, but they are long files with a lot of CSV data and a lot of HTML tags, so they appear as an unintelligible mess when I look at them in Notepad++. I can't seem to get s.replace('\\n', '\n') to do the job. I've fooled around with os.linesep and other things, but my head is whirling. Can someone help?
Simplest way to get a real linefeed saved in a text file
0
0
0
29
46,693,617
2017-10-11T16:47:00.000
0
0
1
0
python,visual-studio,visual-studio-2017
46,779,460
2
false
0
0
I have had the same thing happen to me, though this was actually with a C# application. I noticed Visual Studio stopped detecting the errors after adding a few NuGet references and doing some manual modifications of the project files. I noticed that this was only happening in this one project; all the other projects I worked on did not seem to have the same issue. I was able to get it working again by creating a new project, moving my code over, and adding the references back one by one. In this situation, it looked as if a corrupt project file or bad reference was to blame, even though the project would compile and run correctly. Does this issue exist in other programming languages/projects?
1
1
0
I just installed the Python development workload for VS 2017 but the editor shows none of the red squiggly underlining for syntax errors that I'm used to seeing with C# on VS, nor any entries in the error list. If I try to run the code with errors, it warns me there are errors in the code but does not specify what they until exceptions are thrown from running. I've tried reinstalling the workload and looked through every available option under the Tools/Options tab but can find nothing about syntax errors. Any fixes detailed for earlier versions of VS no longer seem to apply, what am I missing?
Visual Studio 2017 not recognising errors in code
0
0
0
480
46,695,289
2017-10-11T18:35:00.000
0
0
1
1
python,vim,macos-high-sierra
49,448,621
1
false
0
0
I had vim crashing after a brew upgrade which also upgraded the python version. Reinstallation did not help, but reinstalling all the plugins (and therefore updating them) did help. Especially rebuilding YouCompleteMe was key.
1
2
0
After upgrading to macOS High Sierra, vim began to crash with the plugins that need Python. I get the error below whenever I activate a Python-based plugin. For example, I use tern for vim for JavaScript files; when I activate this plugin, vim opens successfully but crashes when I open a JavaScript file. I have reinstalled vim and Python with brew; it did not work. I have also built vim from source; it did not work either. Vim: Caught deadly signal SEGV
vim with python plugins crashes on macOS high sierra
0
0
0
316
46,696,267
2017-10-11T19:41:00.000
4
1
0
0
python,pycharm,remote-debugging
46,715,433
1
true
0
0
Solved the problem. There are two places to edit the same remote interpreter: one is Default Settings -> Project Interpreter -> settings icon -> More -> edit icon; the other is Tools -> Deployment -> Configuration. The settings in both places need to be correct for the same remote interpreter. For some reason, the password in the first location was cleared.
1
1
0
I have been using the remote interpreter all the time before, but suddenly it shows a failure message: can't run python interpreter: error connecting to remote host. I am using SFTP, and I have tried "Test SFTP connection" and got a success message with the same host. I am wondering how to see verbose messages for the remote debugging connection. I am using PyCharm 2017.2 Professional.
Pycharm stopped working in remote interpreter
1.2
0
0
1,301
46,696,478
2017-10-11T19:55:00.000
0
0
1
0
python,machine-learning,naivebayes
46,697,151
1
false
0
0
There are several ways to do that. The simplest is to concatenate the hashing vectors with the integers and train on that bigger feature vector. It will work, though it would be more reasonable to use a different classifier, because MultinomialNB can't model the interactions between the features. But if you want nothing else but MultinomialNB, you can do it. You can also: train two classifiers, one on the hashing vectors and one on the integers, and weight their outputs; or use MultinomialNB on the text and a different classifier on the integers; or use MultinomialNB on the text and feed its output as a feature together with the integers.
1
0
1
The data consists of text parameters as well as integer parameters. The problem is to train the model on both kinds of data. A HashingVectorizer is used for the text parameters. Thanks in advance.
How to use multinomial naive bayes for both text and non text data using python?
0
0
0
207
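A sketch of the first option above (concatenating the hashed text features with the integer columns), assuming scikit-learn; the data is made up, and alternate_sign=False keeps the hashed features non-negative, which MultinomialNB requires:

    import numpy as np
    from scipy.sparse import hstack
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["red apple", "green pear"]        # made-up data
    ints = np.array([[3], [7]])
    y = [0, 1]

    vec = HashingVectorizer(n_features=2 ** 10, alternate_sign=False)
    X = hstack([vec.transform(texts), ints])   # one combined feature matrix
    MultinomialNB().fit(X, y)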
46,698,519
2017-10-11T22:27:00.000
0
0
0
0
python,optimization,scipy
46,698,554
1
false
0
0
The minimize function takes an options dict as a keyword argument. Accepted keys for this dict include disp, which should be set to True to print the progress of the minimization.
1
0
1
I am using the minimize function from scipy.optimize library. Is there a way to print some values during the optimization procedure? Values like the current x, objective function value, number of iterations and number of gradient evaluations. I know there are options to save these values and return them after the optimization is over. But can I see them at each step?
scipy optimize - View steps during procedure
0
0
0
311
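A sketch using the options dict mentioned above; a callback can additionally print the current x at each iteration (the objective here is a toy example):

    from scipy.optimize import minimize

    def f(x):
        return (x[0] - 3) ** 2

    res = minimize(f, x0=[0.0],
                   callback=lambda xk: print("current x:", xk),  # each iteration
                   options={"disp": True})                       # summary at the end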
46,698,570
2017-10-11T22:33:00.000
0
0
0
0
python,image-processing
46,699,385
1
false
0
0
Actually, I think I have figured it out; it's pretty simple maths. Here is what I am going to do: take every point and subtract the first box point values, which gives me the points as if the box started at [0, 0]; then apply the box/normalised-size ratio to every point.
1
0
1
I am currently learning python and playing around with tensorflow. I have a bunch of images where I have obtained the landmarks (pixel points) of a person's facial features such as ears and eyes. In addition, it also provides me with a box (4 coordinates) where the face exists. My goal is to normalise all the data from different images into a standard sized rectangle / square and calculate the position of the landmarks relative to the normalised size. Is there an API that allows me to do this already or should I get cracking and calculate the points myself? Thanks in advance.
Normalise face landmark data using python
0
0
0
339
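The two steps from the answer above as a NumPy sketch; the box, points and target size are all made-up values:

    import numpy as np

    box = np.array([40, 60, 240, 260])              # hypothetical x1, y1, x2, y2
    landmarks = np.array([[100, 120], [180, 200]])  # hypothetical pixel points
    target = 128.0                                  # normalised square size

    shifted = landmarks - box[:2]                   # box origin moved to (0, 0)
    normalised = shifted * (target / (box[2:] - box[:2]))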
46,701,063
2017-10-12T03:44:00.000
0
0
1
0
python,switch-statement
65,283,677
6
false
0
0
I remember that, in ancient times, an inexperienced Larry Wall said that Perl didn't need a case/switch construct because it could be done the same way with if - elif - elif ... else. Back then Perl was nothing more than a mere scripting tool for hacker kiddies. Of course, today's Perl has a switch construct. It's not unexpected that, some decades later, a new generation of kids with their new toys is doomed to repeat the same dumb statement. It's all about maturity, boys. Python will eventually have a case construct, and when it has matured enough as a programming language, like FORTRAN/Pascal and C and all languages derived from them, it may even have a "goto" statement :) BTW, a case switch is usually translated to asm as an indirect jump through a list of addresses of the respective cases. It's an unconditional jump, which is far more efficient than comparing first (avoiding branch misprediction failures); even with just a couple of cases it is considered more efficient. For a dozen or more cases (up to hundreds in code for a device driver) the advantage of the construct is unquestionable. I guess Larry Wall didn't talk assembly back then.
1
62
0
Please explain why Python does not have the switch-case feature implemented in it.
Why doesn't Python have switch-case?
0
0
0
42,643
46,701,216
2017-10-12T04:03:00.000
2
0
0
0
python,tensorflow,machine-learning,keras,neural-network
46,704,606
1
true
0
0
No, because it is a generator the model does not know the total number of training samples. Therefore, it finishes an epoch when it reaches the final step defined with the steps_per_epoch argument. In your case it will indeed train 192 samples per epoch. If you want to use all samples in your model you can shuffle the data at the start of every epoch with the argument shuffle.
1
1
1
I am using model.fit_generator() to train a neural network with Keras. During the fitting process I've set the steps_per_epoch to 16 (len(training samples)/batch_size). If the mini batch size is set to 12, and the total number of training samples is 195, does it mean that 3 samples won't be used in the training phase?
Are all train samples used in fit_generator in Keras?
1.2
0
0
198
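The arithmetic behind the answer above, as a quick sketch:

    samples, batch_size = 195, 12
    steps_per_epoch = samples // batch_size          # 16
    print(steps_per_epoch * batch_size)              # 192 samples seen per epoch
    print(samples - steps_per_epoch * batch_size)    # 3 left out (unless shuffled)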
46,701,431
2017-10-12T04:29:00.000
0
0
1
0
python,arrays
46,701,588
1
false
0
0
Use len(array) to get the length of the array. Then try two loops to spread all the values into a 2-D array.
1
0
1
I am trying to find the length of a 1-D float array and convert it into a 2-D array in Python. Also, when I try to print the elements of the float array, the following error comes up: 'float' object is not iterable.
How to find the number of elements of a float array in python and how to convert it to a 2-dimensional float array?
0
0
0
608
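A sketch of both steps with NumPy; the 2-D shape is an arbitrary example and its product must match the length:

    import numpy as np

    arr = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    print(len(arr))             # 6 elements
    grid = arr.reshape(2, 3)    # 2 rows x 3 columns; 2 * 3 must equal len(arr)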
46,702,163
2017-10-12T05:38:00.000
1
1
0
0
python,pip,pypi,python-wheel,twine
46,708,728
1
true
0
0
I don't think it's possible. Setup username and password at PyPI and use them in your .pypirc.
1
1
0
I can see that we can create account on PyPI using OpenID as well. Can we also upload python packages to PyPI server using OpenID? Something like generic upload procedure by creating .pypirc file and using PyPI username and password.
Is it possible to upload python package on PyPI using OpenID?
1.2
0
1
111
46,702,300
2017-10-12T05:51:00.000
1
0
1
0
python-3.x
46,702,365
2
false
0
0
Because computers do the calculation in binary, and they have a limited number of bits to represent the numbers. The base-2 representation cannot hold the value exactly, so the machine has to round by a bit or so; then, when it translates the result back to decimal, you get that trailing .0000000000001.
1
1
0
I used Pythonista to divide 12.76 by 106, and the result is 0.1200000000001 instead of 0.12. Why?
Why in python 12.76/106 is 0.1200000000001 and not 0.12?
0.099668
0
0
190
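A quick way to see the effect the answer above describes, plus the decimal module as an exact-decimal alternative:

    from decimal import Decimal

    print(0.1 + 0.2)            # 0.30000000000000004, not 0.3
    print(Decimal(12.76))       # the exact binary value actually stored for 12.76
    print(Decimal("12.76") / Decimal("106"))   # exact decimal division instead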
46,708,236
2017-10-12T11:16:00.000
0
1
0
0
c#,python,json,rest,web-services
46,719,477
1
false
0
0
It is clearly difficult to provide an answer with so few details. TL;DR: it depends on what game you are developing. However, polling is very inefficient for at least three reasons. First, as you have already pointed out, it generates additional workload when there is no need. Second, it requires TCP, whereas server-generated updates can be sent using UDP instead, with some pros and cons (like potential loss of packets due to lack of ACK). Third, you may get the updates too late, particularly in the case of multiplayer games. Imagine that the last update happened right after the previous poll, and you poll every 5 seconds: the status could already be stale. The long and the short of it is that if you are developing a turn-based game, polling could be alright. If you are developing (as the use of Unity3D would suggest) a real-time game, then server-generated updates, ideally using UDP, are in my opinion the way to go. Hope that helps and good luck with your project.
1
0
0
I have a game where I have to get data from the server (through a REST web service with JSON), but the problem is that I don't know when the data will be available on the server. So I decided to hit the server after a specific interval, or on every frame of the game. But certainly this is not the right, scalable, or efficient approach; obviously, hammering is not the right choice. Now my question is: how do I know that the data has arrived at the server, so that I can use it to run my game? Or how should I direct the back-end team to design the server so that it responds efficiently? Remember that on the server side I have Python, while the client side is C# with the Unity game engine.
Check the data has updated at server without requesting every frame of the game
0
0
1
46
46,709,569
2017-10-12T12:25:00.000
0
1
0
0
python,hyperlink,web-scraping,percentage
46,709,727
3
false
0
0
%20 is the URL encoding for a space (0x20 being the space's ASCII code). Just replace all those %20 sequences with spaces and everything will likely work.
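The standard library can do the replacement for you; a small sketch (query fragment shortened for brevity):

```python
from urllib.parse import unquote, quote

q = "ecb%20draghi%20since%3A2012-09-01"
print(unquote(q))                    # 'ecb draghi since:2012-09-01'
print(quote("ecb draghi", safe=""))  # re-encode: 'ecb%20draghi'
```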
1
0
0
I am trying to scrape some Tweets from this URL using Python 3.5: url = "https://twitter.com/search?l=en&q=ecb%20draghi%20since%3A2012-09-01%20until%3A2012-09-02&src=typd" My problem is that %20d, %20s, and %20u are already encoded in Python 3.5, so my code does not run on this URL. Is there a way to solve this issue? Thanks in advance, Best
%20d %20s %20u in link Python 3.5
0
0
1
622
46,709,686
2017-10-12T12:32:00.000
1
0
0
0
python,hex,byte
46,709,811
2
true
0
0
bytearray(b'\x100') is correct; you are just interpreting it the wrong way. It is the character \x10 followed by the character 0 (which happens to be the ASCII encoding of \x30).
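A short demonstration that the two representations are the same bytes:

```python
b = bytearray.fromhex("1030")
print(b)                            # bytearray(b'\x100') -- '0' is just ASCII 0x30
print(list(b))                      # [16, 48], i.e. 0x10 and 0x30
print(b == bytearray(b'\x10\x30'))  # True
```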
2
1
0
I want to convert a hexadecimal string like 1030 to a byte array like b'\x10\x30' I know we can use bytearray.fromhex("1030") or "1030".decode("hex"). However, I get output '\x100'. What am I missing here?
convert hexadecimal string to byte array
1.2
0
0
1,209
46,709,686
2017-10-12T12:32:00.000
0
0
0
0
python,hex,byte
46,709,749
2
false
0
0
There is a built-in function in bytearray that does what you intend: bytearray.fromhex("de ad be ef 00"). It returns a bytearray, and it reads hex strings with or without space separators.
2
1
0
I want to convert a hexadecimal string like 1030 to a byte array like b'\x10\x30' I know we can use bytearray.fromhex("1030") or "1030".decode("hex"). However, I get output '\x100'. What am I missing here?
convert hexadecimal string to byte array
0
0
0
1,209
46,711,553
2017-10-12T13:58:00.000
2
0
0
1
python-2.7,command-line
46,711,822
1
true
0
0
Try using \\ after the mapped drive letter. I have a shared drive mapped locally on my Windows machine under Z. When I run my local Python interpreter and give the above shared path, it works as expected. For your example, it should be: C:\path\to\python\python.exe s:\\file_path\python_script.py My example: C:\Users\david.mcmahon>python z:\\Test\hooks\my_app.py Running from shared drive.. Hope this helps.
1
0
0
I am working on a Windows machine with admin rights and Python 2.7. I would like to use my locally downloaded Python to call a script on the shared drive from the command line. Unfortunately, this is not working: C:\python27\python.exe net use S:file_path\python_script.py What is the right way to call a shared Python script but run it with a local copy of Python?
Use local python to run python script from shared drive
1.2
0
0
5,274
46,712,780
2017-10-12T14:55:00.000
0
0
0
1
python,outlook,win32com,outlook-redemption
46,713,942
1
false
0
0
Task Scheduler runs as a service, and Office apps (Outlook included) should not be used in a service.
1
0
0
I have a Python script which uses win32com.client.Dispatch and Redemption in order to connect to an instance of Outlook and harvest some data from a public folder. When I execute this script on the command line it works just fine. Added as a scheduled task, it appears to get hung at the line Outlook = win32com. I added Event Log statements along the way to see where it is getting hung; other than that I don't have much in the way of error logs (since it doesn't actually fail). Are there any security settings I should be concerned about, or anything I am not thinking of? Everything works fine with a standard Python call in the CMD.
Python - Win32Com Client Dispatch Hanging as Scheduled Tasks
0
0
0
596
46,713,759
2017-10-12T15:42:00.000
0
0
1
0
python,django,nginx,server
46,714,888
2
false
1
0
@Mounir's answer is pretty solid, but I wanted to tag on another piece of advice: using playbooks from Ansible Galaxy is also an option. Playbooks already exist for lots of use cases (including Django), and they take into account many of these best practices. I am not saying that all playbooks on Galaxy are good, but some are, and by virtue of being open source, they are frequently patched and updated.
1
0
0
So I'm setting up a server by myself. Now I ran into lots of different opinions about where to install the packages. I am thinking of the core packages like nginx, gunicorn, python3, postgresql, and so on. I learned that setting up a venv (virtual environment) is a good thing, so I can have several projects running with different versions of packages. But it's a bit confusing which ones are not going to be inside the venv. Some install PostgreSQL outside the venv but psycopg2 inside, some put gunicorn inside the venv, and so on. Are there any best practices or rules that are safe to follow? For info, I'm setting up an Ubuntu 16.04 server with nginx, gunicorn, PostgreSQL, psycopg2, and python3.
Best practice for package install on django server
0
0
0
45
46,714,151
2017-10-12T16:03:00.000
0
0
0
0
javascript,jquery,python,ajax,django
46,714,365
1
true
1
0
Just make AJAX calls to the view, return only the required content from the server, and on AJAX success replace it using the .html() method.
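On the Python side, a hedged sketch of such a view (the model, field, and template names below are invented for illustration):

```python
# views.py -- hypothetical; the Image model and items.html template are assumptions
from django.shortcuts import render
from myapp.models import Image   # assumed model

def items_partial(request):
    category = request.GET.get("category")
    images = Image.objects.filter(category=category)
    # Return only the fragment; the AJAX success handler swaps it in with .html()
    return render(request, "items.html", {"images": images})
```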
1
0
0
I am using bootstrap with nav-tabs to hopefully select filtered images based on the tab clicked. I can do an AJAX call to the view that I created that filters out the images based on category and returns an items.html template file. Is there a way to load the partial template without having to reload the entire page? Should I just do the AJAX call and it will update that partial view?
Python Django Load Images based on Tab Clicked
1.2
0
0
252
46,714,971
2017-10-12T16:48:00.000
0
0
0
0
python,sql,sqlalchemy,amazon-redshift
46,715,732
2
false
0
0
If you don't run much else on that machine, then memory should not be an issue. Give it a try, and monitor memory use during the execution. Also check the load average to see how much pressure the system is under.
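One hedged way to keep memory flat is to stream the result in chunks with pandas (the connection string and table name are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:pass@host:5439/db")  # placeholder URL
chunks = pd.read_sql_query("SELECT * FROM big_table", engine, chunksize=50000)
for i, chunk in enumerate(chunks):
    # append each chunk; write the header only once
    chunk.to_csv("out.csv", mode="a", header=(i == 0), index=False)
```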
1
0
0
I'm going to run a query that returns a huge table (about 700 MB) from Redshift and save it to CSV using SQLAlchemy and Python 2.7 on my local machine (Mac Pro). I've never done this with such huge queries before, and obviously there could be memory and other issues. My question is: what should I take into account, and how should I use SQLAlchemy in order to make the process work? Thanks, Alex
Python/SQLAlchemy: How to save huge redshift table to CSV?
0
1
0
1,752
46,717,736
2017-10-12T19:47:00.000
1
0
1
0
python,pip
46,718,546
1
false
0
0
Make sure you search ~/.cache/pip, not ~/cache/.pip as stated in your question. As of version 6, pip comes with on-by-default cache functionality which respects your XDG_CACHE_HOME variable. Check it, or try /tmp/pip*. Hope that helps.
1
1
0
I am using pip version 7.0.3 with Python 2.7. When I install a package, it says it is using a cache directory, and I want to know the location of that cache directory. I am using a virtualenv. I have searched in the location ~/cache/.pip, but the package is not found in that directory; pip is referring to some other directory. Please help me with this.
Where is the cache directory in pip 7.0?
0.197375
0
0
58
46,719,157
2017-10-12T21:27:00.000
0
0
0
0
python,django,django-models,django-admin
47,341,274
2
false
1
0
Adding a foreign key relationship will add this to the UI, but make sure you haven't included that field's name in raw_id_fields.
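A minimal sketch of the idea (the model names are made up):

```python
# models.py -- hypothetical models for illustration
from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=100)

class Item(models.Model):
    # Renders as a dropdown of Category rows in the Item admin form,
    # as long as 'category' is not listed in the admin's raw_id_fields.
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
```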
1
0
0
I have two models connected to Django Admin system. And would like to have a possibility such that by using the dropdown list, choose a specific value from the first model in a second model. What do you think? Is this possible? Thanks in advance
How to join two tables in Django Admin system
0
0
0
439
46,719,568
2017-10-12T22:02:00.000
2
0
0
0
python,python-2.7,sqlalchemy
46,736,027
1
false
0
0
literal_column is intended to be used as, well, a literal name for a column, not as a parameter (which is a value), because column names cannot be parameterized (they are part of the query itself). You should generally not be using literal_column to put a value in a query, only column names. If you are accepting user input for column names, you should whitelist those names. One exception is that sometimes you want to output some really complex expression not directly supported by SQLAlchemy, and literal_column basically allows you to put freeform text in a query. In these cases, you should ensure that user-supplied parts of the expression (i.e. values) are still passed via bind params.
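A hedged sketch of the whitelisting idea, keeping values as bind parameters (the table and column names here are assumptions):

```python
from sqlalchemy import text, bindparam

ALLOWED_COLUMNS = {"name", "email"}   # whitelist of user-selectable columns

def build_query(sort_col):
    if sort_col not in ALLOWED_COLUMNS:
        raise ValueError("column not allowed")
    # The column name is interpolated (safe because it is whitelisted);
    # the value stays a bind parameter.
    return text(
        "SELECT * FROM users WHERE id > :min_id ORDER BY " + sort_col
    ).bindparams(bindparam("min_id"))
```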
1
0
0
On the face of it, it seems that bindparam should generally be used to eliminate SQL injection. However, in what situations would one need to use literal_column instead of bindparam, and what measures should be taken to prevent SQL injection?
SQLAlchemy: when should literal_column be used instead of bindparam?
0.379949
1
0
918
46,719,690
2017-10-12T22:13:00.000
0
0
0
1
python,azure-batch
46,853,245
1
false
0
0
A fix was deployed to all Azure regions on 2017-10-19 that should prevent this behavior from happening. You will need to redeploy your pool to get this fix - if you've already mounted something under $AZ_BATCH_NODE_ROOT_DIR, then it is recommended to remotely login to the node and unmount the device first prior to deleting the pool. On a side note: it is not recommended to mount any resource under a task directory. Because task directories are cleaned up when deleted, this can lead to deletion of mounted resources.
1
0
0
I have created a single Pool (size Standard_D32_v3) with a single Job. I have set the pool property max_tasks_per_node=32. I then have a list which contains 27000 objects. Since I cannot add more than 100 tasks at a time to a Job, I "chunk" my list so that I have a list of lists, each with 100 tasks. Finally, I insert each "chunk" of tasks. In the StartTask, I mount a File Share (not BLOB), which contains files needed for processing. My File Share has folders: 2012, 2013, 2014, 2015, 2016, 2017. I have found that for some reason, Azure Batch is deleting all files and folders except for 2017. This is the 2nd time it has happened. Nowhere in my code do I delete from the file share or anywhere else. I do delete the Pool, Job, and Tasks when finished. What the *&^% is going on? UPDATE This is still happening. When the File Share is mounted, it is done via Bash as a command passed into the StartTask. The Azure portal gives the connection information for the File Share and provides the following CHMOD configuration: dir_mode=0777,file_mode=0777. I thought that I would be clever and change the CHMOD properties to 444 (read only). Unfortunately, I then get a "Permission Denied" error. I then changed to 555 (read and execute) and files were once again deleted. This is 100% an issue with Azure Batch. Microsoft does not do any logging whatsoever of File Shares (or even allow users to). I was hoping to see delete requests/operations and the IP and time each request originated from, but alas, it is impossible...
Azure Batch is deleting files from File Share
0
0
0
136
46,720,222
2017-10-12T23:12:00.000
2
0
1
0
python,pip,conda
46,825,476
1
true
0
0
Try using the below command on windows command prompt or PowerShell: pip install --proxy DOMAIN\username:password@proxyserver:port packagename Replace the DOMAIN, username, password, proxy server and port with values specific to your system. This works for a windows 10 installation authenticated by Active Directory that is behind a corporate proxy server.
1
2
0
In R I can use install.packages("pkgName") to install a new package, no problem. But when I try Python and do pip install package, it fails with the error Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11004] getaddrinfo failed',)': /simple/pyarrow/ I think it's because pip doesn't know how to automatically detect the proxy (that gets set in Internet Explorer) like R can. Most of the info I find online either doesn't work or is just too complicated for someone without specialist knowledge to follow. conda install fails as well. Is there an easy fix to this?
How to use conda/pip install to install packages behind a corporate proxy?
1.2
0
0
2,851
46,721,382
2017-10-13T02:00:00.000
0
1
0
1
python,excel
46,721,674
1
false
0
0
Create an automatic task using Task Scheduler in Windows: create a .bat file that runs the Python script and schedule that task from the Windows Task Scheduler. Hope this helps.
1
0
0
I've constructed a Python script, python_script.py, on Linux. Is there a way to set up a cron job which will be compatible with both Linux and Windows? In fact, even though I have implemented this script on Linux, it will be run as a scheduled job on Windows. Otherwise, assume the script works well on both Linux and Windows. How could we create an automatic task on Windows (similar to a cron job on Linux)?
Cron job which will work under Linux and Windows
0
0
0
400
46,729,915
2017-10-13T12:25:00.000
1
0
0
1
python,google-app-engine,push,publish-subscribe,google-iap
59,740,146
2
false
1
0
I had a pretty similar issue: a GAE 2nd generation standard application in project A, wired under IAP, that cannot receive pushed Pub/Sub messages from project B. My workaround is: Set up a Cloud Function (HTTP triggered) in project A; Set up the subscription of the project B Pub/Sub topic to push messages to the above Cloud Function endpoint; The above Cloud Function works like a proxy to filter (needed in my case, ymmv) and forward the Pub/Sub message in an HTTP request to the GAE app; Since the Cloud Function is within the same project as the GAE app, you only need to add the IAP authentication for the above HTTP request (which fetches the token assigned to the specific SA). There should be a project A SA set up in the project B IAM, with at least the Pub/Sub Subscriber and Pub/Sub Viewer roles. Hope this could be an option for your case.
1
2
0
I am testing out a very basic Pub/Sub subscription. I have the push endpoint set to an App I have deployed through a Python Flex service in App Engine. The service is in a project with Identity-Aware Proxy enabled. The IAP is configured to allow through users authenticated with our domain. I do not see any of the push requests being processed by my app. I turned off the IAP protection and then I see that the requests are processed. I turn it back on and they are no longer processed. I had similar issues with IAP when trying to get a Cron service running; that issue resolved itself after I deployed a new test app in the same project. Has anyone had success with configuring a push subscription through IAP? I also experimented with putting different service accounts on the IAP access list and none of them worked.
Google Pub/Sub push subscription into IAP-protected App Engine
0.099668
0
0
890
46,730,944
2017-10-13T13:18:00.000
3
0
0
0
php,python,odbc,aspen
46,762,657
3
false
0
0
I am unaware of a method to access IP21 data directly via PHP; however, if you're happy to access data via a web service, there are both REST and SOAP options. Both methods are extremely fast and responsive. AFW security still applies to clients accessing the web services; clients will require SQLplus read (at least) access. SOAP Requires the "Aspen SQLplus Web Server/Service and Health Monitor" component to be installed on the IP21 server (selected during install of IP21). Recent versions of IP21 require a slight modification to the web.config file to allow remote access. If you cannot execute the web service remotely, try doing it locally (i.e. on the same machine as the IP21 server) and see if this is the issue. Example: http://IP21ServerHostName/SQLPlusWebService/SQLplusWebService.asmx/ExecuteSQL?command=select%20*%20from%20compquerydef; REST My preference (over SOAP), as it is super easy to access using jQuery (JavaScript), in a couple of lines of code! Unsure of exactly what IP21 component is required at install time for this, but it appears to be on most of my IP21 servers already. Arguments in the URL can control the number of rows returned (handy). If used within jQuery/JavaScript, the web page must be hosted on the AspenOneServerHostName machine, or else you'll run into Cross-Origin Resource Sharing (CORS) issues. Example: http://AspenOneServerHostName/ProcessData/AtProcessDataREST.dll/SQL?%3CSQL%20c=%22DRIVER={AspenTech%20SQLplus};HOST=IP21ServerHostName;Port=10014;CHARINT=N;CHARFLOAT=N;CHARTIME=N;CONVERTERRORS=N%22%20m=%22DesiredMaxNumberOfRowsReturned%22%20s=%221%22%3E%3C![CDATA[select%20*%20from%20compquerydef]]%3E%3C/SQL%3E Notes: AspenOneServerHostName can be the same as IP21ServerHostName. AspenOneServerHostName must have ADSA configured to view IP21ServerHostName. Replace DesiredMaxNumberOfRowsReturned with a number.
2
2
0
Is it possible to query data from InfoPlus 21 (IP21) AspenTech using php? I am willing to create a php application that can access tags and historical data from AspenTech Historian. Is ODBC my answer? Even thinking that is, I am not quite sure how to proceed. UPDATE: I ended up using python and pyODBC. This worked like a charm! Thank you all for supporting.
How to query data from an AspenTech IP21 Historian using PHP?
0.197375
1
0
7,310
46,730,944
2017-10-13T13:18:00.000
2
0
0
0
php,python,odbc,aspen
50,016,010
3
false
0
0
Yes, an ODBC driver should be applicable to meet your requirement. We have already developed an application to insert data into the IP21 historian which uses the same protocol. Similarly, some analytical tools (e.g. Seeq Cooperation) also use ODBC to fetch data from the IP21 historian. Therefore it should be possible in your case as well.
2
2
0
Is it possible to query data from InfoPlus 21 (IP21) AspenTech using php? I am willing to create a php application that can access tags and historical data from AspenTech Historian. Is ODBC my answer? Even thinking that is, I am not quite sure how to proceed. UPDATE: I ended up using python and pyODBC. This worked like a charm! Thank you all for supporting.
How to query data from an AspenTech IP21 Historian using PHP?
0.132549
1
0
7,310
46,736,521
2017-10-13T19:04:00.000
0
0
0
0
arrays,python-3.x,compression
46,737,626
1
false
0
0
n-dimensional arrays can be many things aside from images. One example would be a geo-spatial representation that would consolidate (roll up) whenever you zoom out and drill down whenever you zoom in. The array-resizing technique should depend on the context in which the resize takes place, and hence there is no single best answer. But typically, resizing arrays or tensors is done by consolidation whenever you reduce the number of entries, and by interpolation whenever you increase them.
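One hedged option for the interpolation route: scipy.ndimage.zoom works on any numeric 2-D array, not just images (the random data below just stands in for the thermal values):

```python
import numpy as np
from scipy.ndimage import zoom

data = np.random.uniform(20, 30, size=(173, 151))  # thermal-like values
resized = zoom(data, (146 / 173, 121 / 151))       # spline interpolation
print(resized.shape)                               # (146, 121)
```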
1
1
1
What is the best way to resize a 2D array (the array has a thermal data contents values between 20 to 30) from size 173X151 to size 146X121 without losing too much information. I understand it is possible to reduce the size of images with some function(images of intensity values 0 to 255) but my understanding that these functions are for images and not other types of arrays. Is there a function for reducing the size for any type of array? something like compressing the 2D array to different sizes? Thanks
Resizing 2D arrays to a different size (e.g. reduction or compression)
0
0
0
40
46,740,127
2017-10-14T02:03:00.000
3
0
0
1
google-app-engine,google-cloud-platform,bigtable,google-cloud-bigtable,google-cloud-python
47,776,406
1
false
1
0
The Bigtable client takes somewhere between 3 ms and 20 ms to complete each request, and because Python is single-threaded, during that period of time it will just wait until the response comes back. The best solution we found was, for any writes, to publish the request to Pub/Sub and then use Dataflow to write to Bigtable. It is significantly faster because publishing a message in Python takes well below 1 ms to complete, and because Dataflow can be set to exactly the same region as Bigtable, and it parallelizes easily, it can write much faster. Though it doesn't solve the scenario where frequent reads or writes need to be instantaneous.
1
1
0
I'm running into a performance issue with the Google Cloud Bigtable Python client. I'm working on a Flask API that writes to and reads from a GCP Bigtable instance. The API uses the Python client to communicate with Bigtable and was deployed to the GCP App Engine flexible environment. Under low traffic, the API works fine. However, during a load test, the endpoints that read from and write to Bigtable suffered a huge performance decrease compared to a similar endpoint that doesn't communicate with Bigtable. Also, a large percentage of requests sent to the endpoint received a 502 Bad Gateway, even when the health check was turned off in App Engine. I'm aware that the client is currently in alpha. I wonder if the performance issue is known, or if anyone else has run into the same issue. Update I found documentation from Google stating: There are issues with the network connection. Network issues can reduce throughput and cause reads and writes to take longer than usual. In particular, you'll see issues if your clients are not running in the same zone as your Cloud Bigtable cluster. In my case, my client was in a different region; moving it to the same region gave a huge increase in performance. However, the performance issue still exists, and the recommendation from the documentation is to put the client in the same zone as Bigtable. I also considered using Container Engine or Compute Engine, where it is easier to specify the zone, but I want to stay with App Engine for its autoscaling functionality and managed services.
Google Cloud Bigtable Python Client Performance Issue
0.53705
1
0
503
46,742,589
2017-10-14T08:41:00.000
0
0
1
0
python,unicode,utf-8
46,790,379
3
false
0
0
Python source is nominally plain ASCII, meaning that the actual encoding does not matter except for literal strings, be they unicode strings or byte strings. Identifiers can use non-ASCII characters (IMHO it would be a very bad practice), but their meaning is normally internal to the Python interpreter, so the way it reads them is not really important. Byte strings are always left unchanged: that means that normal strings in Python 2 and byte literal strings in Python 3 are never converted. Unicode strings are always converted: if the special string coding: charset_name exists in a comment on the first or second line, the original byte string is converted as it would be with decode(charset_name); if no encoding is specified, Python 2 will assume ASCII and Python 3 will assume utf8.
1
0
0
Say I have a source file encoded in utf8. When the Python interpreter loads that source file, will it convert the file content to unicode in memory and then try to evaluate the source code in unicode? If I have a string with a non-ASCII char in it, like astring = '中文', and the file is encoded in gbk, then running that file with Python 2 I found that the string is actually still in raw gbk bytes. So I doubt that the Python 2 interpreter converts the source code to unicode, because if it did, the string content would be in unicode (I heard it is actually UTF-16). Is that right? And if so, what about the Python 3 interpreter, does it convert source code to unicode format? Actually, I know how to define unicode and raw strings in both Python 2 and 3. I'm just curious about one detail of how the interpreter loads source code. Will it convert the WHOLE raw source code (encoded bytes) to unicode at the very beginning and then try to interpret the unicode-format source code piece by piece? Or instead, does it just load the raw source piece by piece, and only decode what it thinks it should? For example, when it hits the statement u'中文', OK, decode to unicode; when it hits the statement b'中文', OK, no need to decode. Which way does the interpreter go?
when python interpreter loads source file, will it convert file content to unicode in memory?
0
0
0
93
46,742,671
2017-10-14T08:51:00.000
-1
0
1
0
python,python-3.x
46,742,752
5
false
0
0
You can use float. For example: a = "20" y = float(a)
1
0
0
Desired result (final values as floats): 00 → 0.0 20 → 20.0 15 → 15.0 05 → 0.5 003 → 0.03 01 → 0.1 How am I supposed to do this? The initial values are strings, but when I convert them to float the zeroes disappear. Are there any pre-made functions for this?
Comma after leading zeroes
-0.039979
0
0
54
46,742,682
2017-10-14T08:53:00.000
1
0
0
0
python,mysql
46,745,333
1
true
0
0
The COMMIT does not actually return until the data has been... committed... so, yes, once you have committed any transaction, the work from that transaction is entirely done, as far as your application is concerned.
1
0
0
I have a MySQL database where I'm loading big files which insert more than 190,000 rows. I'm using a Python script which does some stuff, then loads the data from a CSV file into MySQL, executes the query, and commits. My question is: if I'm sending such a big file, is the database ready right after the commit command, or how can I detect when all the data has been inserted into the database?
MySQL commit trigger done
1.2
1
0
54
46,743,068
2017-10-14T09:39:00.000
0
0
1
0
python,visual-studio-code,auto-import
67,624,334
6
false
0
0
You can find it in the VS Code extension store; its name is IMPORTMAGIC. It works fantastically: it will include all the modules which you use in your script, and it has a code action (Ctrl+.) which will also import a library.
1
45
0
Is there a Python auto import extension/plugin available for Visual Studio Code? By auto import I mean, auto import of python modules. Eclipse and Intellij has this feature with Java.
Python auto import extension for VSCode
0
0
0
48,099
46,744,723
2017-10-14T12:45:00.000
1
0
1
0
python-3.x,pip
46,744,963
1
false
0
0
In general, the answer is no. First of all, you would have to answer hard questions like whether it is acceptable to install a new runtime on the user's computer, and whether they would appreciate that. One option is to simply bundle the Python interpreter you need in your installer. Never rely on the user's installed Python, unless perhaps they ask you to. Another option is to make your programs compatible with both Python 2 and Python 3--for some programs this is not a ton of work.
1
1
0
Is there any way to write and run a script which checks the version of Python currently being used and, if it's 2, installs Python 3? Also, is there any way to install pip with a script in a similar way? New to Python, sorry if there is any mistake.
Writing a python script which asks to upgrade version
0.197375
0
0
27
46,745,120
2017-10-14T13:29:00.000
1
0
0
0
excel,python-2.7,xlsxwriter
46,747,332
2
false
0
0
As far as I know, it isn't possible in Excel to hide gridlines for a range; gridlines are either on or off for the entire worksheet. As a workaround, you could turn the gridlines off and then add a border to each cell where you want them displayed. As a first step, you should figure out how you would do what you want in Excel itself, and then apply that to an XlsxWriter program.
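A hedged sketch of that workaround with XlsxWriter; apply the border loop to whichever range should keep visible lines (the 80x9 range below is only illustrative):

```python
import xlsxwriter

wb = xlsxwriter.Workbook("out.xlsx")
ws = wb.add_worksheet()
ws.hide_gridlines(2)                       # hide screen and print gridlines
border = wb.add_format({"border": 1})
for row in range(80):                      # illustrative range: rows 1..80
    for col in range(9):                   # columns A..I
        ws.write_blank(row, col, None, border)
wb.close()
```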
1
3
1
I'm creating an Excel file from pandas, and I'm using worksheet.hide_gridlines(2). The problem is that all gridlines in my current worksheet are hidden. I need to hide only a range of cells, for example A1:I80. How can I do that?
Set worksheet.hide_gridlines(2) to certain range of cells
0.099668
1
0
2,139
46,745,753
2017-10-14T14:41:00.000
2
0
0
0
python,django,django-models
46,745,981
1
true
1
0
Without being an absolute Django expert, here is my opinion. The Django ORM is far from being the only feature this Framework has to offer (URLs routing, test client, user sessions variables, etc.), but surely it is one the main component you want to use while working with Django since it is often directly linked to other core features of Django. If using the ORM is completely forbidden, a lot of features out of the box won't be available for you. One of the main features I can think about is the admin interface. You won't be able to use it if the ORM is not an option for you. So, in my opinion, you should go for another Framework like Flask. Mainly because without using the ORM, some of the Django value is gone. Hope it helps!
1
0
0
In a school project, my team and I have to create a shopping website with a very specific server-side architecture. We agreed to use python and turned ourselves towards Django since it seemed to offer more functionalities than other possible frameworks. Be aware that none of us ever used Django in the past. We aren't masters at deploying application on the web either (we are all learning). Here's my problem: two weeks in the project, our teacher told us that we were not allowed to use any ORM. To me, this meant bye bye to Django models and that we have to create everything on our own. Here are my questions: as we already have created all our python classes, is there any way for us to use them alongside our Django app? I have not seen any example online of people using their own python classes within a Django app. If it were possible, where should we instantiate all our objects? Would it be easier to just go with another framework (I am thinking about Flask). Am I just missing important information about how Django works and asking a dumb question? We have 4 weeks completed and 6 more to go before finishing our project. I often see online "use Flask before using Django" since it is simpler to use. We decided on Django because in the project description, Django was recommended but not Flask. Thanks for the help.
Using custom python classes alongside Django app
1.2
0
0
74
46,746,542
2017-10-14T16:03:00.000
0
1
1
0
python,python-3.x,csv
46,746,600
1
true
0
0
By importing it in the file that defines the foo function. The foo function doesn't know to look in the dictionary containing the globals you use in the REPL (where you have imported csv). It looks in the globals of its module (there are other steps here, of course); if it doesn't find the name there, you'll get a NameError.
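A tiny sketch: the import lives in the module, next to foo (the file name is an assumption):

```python
# mymodule.py -- the import must be in the file that defines foo
import csv

def foo(path):
    with open(path, newline="") as f:
        return [row for row in csv.reader(f)]

# In the shell, `from mymodule import foo` is then enough;
# importing csv in the shell is not needed for foo to work.
```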
1
0
0
I have a .py file containing some functions. One of the functions requires Python's csv module. Let's call it foo. Here is the thing: if I enter the Python shell, import the csv module, write the definition of foo, and use it, everything runs fine. The problem comes when I try to import foo from a custom module. If I enter the Python shell, import the csv module, import the module where foo is located, and try to use it, it returns an error stating that 'csv' has not been defined (it behaves as if the csv module had not been imported). I'm wondering if I'm missing some kind of scope behaviour related to imports. How can I enable foo to use the csv module or any other module it requires? Thank you in advance
Using a module inside another module
1.2
0
0
42
46,749,037
2017-10-14T20:28:00.000
1
0
0
0
python-2.7,data-science,imputation
57,548,526
3
false
0
0
The philosophy behind splitting data into training and test sets is to have the opportunity of validating the model through fresh(ish) data, right? So, by using the same imputer on both train and test sets, you are somehow spoiling the test data, and this may cause overfitting. You CAN use the same approach to impute the missing data on both sets (in your case, the decision tree), however, you should instantiate two different models, and fit each one with its own related data.
1
7
1
Interestingly, I see a lot of different answers about this both on stackoverflow and other sites: While working on my training data set, I imputed missing values of a certain column using a decision tree model. So here's my question. Is it fair to use ALL available data (Training & Test) to make a model for imputation (not prediction) or may I only touch the training set when doing this? Also, once I begin work on my Test set, must I use only my test set data, impute using the same imputation model made in my training set, or can I use all the data available to me to retrain my imputation model? I would think so long as I didn't touch my test set for prediction model training, using the rest of the data for things like imputations would be fine. But maybe that would be breaking a fundamental rule. Thoughts?
Can I use Train AND Test data for Imputation?
0.066568
0
0
2,913
46,751,028
2017-10-15T02:04:00.000
0
0
0
0
python,css,selenium,web-scraping
46,963,554
2
false
0
0
The problem is that WebDriver clients (webdriver.io, for example) wait until the page has fully loaded and the loading indicator in the tab is gone. This is for a good reason: a lot of APIs like .getText do not work until the complete page is loaded, because some elements are only loaded at the end. But you can reduce the loading time: 1. Use an extension like ScriptSafe or another simple script blocker that blocks everything with inline or external JavaScript. 2. Go to the Chrome settings and disable everything like cookies, JavaScript, Flash, etc. 3. Go to chrome://flags and disable everything from JavaScript APIs (the Gamepad API, etc.) to WebGL, Canvas, and so on; you can really disable everything. I also have a Chrome profile where I disabled everything. Now, with normal internet speed and a good CPU, you can open every site in 1-3 seconds. Alternatively, you can try a headless browser.
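A hedged Selenium sketch of the asker's own idea: use a non-blocking page-load strategy and wait explicitly for the title (support for pageLoadStrategy "none" varies by browser and driver version):

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.CHROME.copy()
caps["pageLoadStrategy"] = "none"          # get() returns without waiting
driver = webdriver.Chrome(desired_capabilities=caps)
driver.get("http://example.com")
WebDriverWait(driver, 10).until(lambda d: d.title != "")
print(driver.title)
driver.execute_script("window.stop();")    # abort any remaining loading
```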
1
0
0
I want a script that scrapes the titles of a list of URLs, but it could be super slow if we need to wait until the whole page is loaded. The title is the only thing I am looking for. Can we stop the page from loading once the title has loaded, maybe with something like EC.title_contains?
Selenium python: How to stop page loading when the head/title gets loaded?
0
0
1
1,464
46,751,206
2017-10-15T02:45:00.000
1
0
1
0
python,regex,abbreviation
46,751,489
1
true
0
0
This is an NLP problem, but it does not impress me as a regex problem - that does not appear to be the most appropriate tool. It seems that you want to parse a token stream and identify promising tokens that potentially are abbreviations. They may, for example, be parenthesis delimited or comma delimited. Annoyingly, they may appear immediately before or after a definition phrase, once stopwords ("the", "i.e.", "after this") have been deleted. One heuristic for identifying potential abbreviations would be case-sensitive match showing non-membership in an English language dictionary. Having identified a potential abbreviation token, you'll want to scan its immediate neighborhood to see if you can explain it in terms of nearby words, ideally using just their initial letters. For a truly challenging dataset, you might try explaining DARPA backronyms. To take this in a different direction, you might try applying word2vec. Here it would be phrase2vec, and the challenge would be to scalably identify multi-word phrases with very very small cosine distance to potential abbreviation tokens.
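That said, a hedged heuristic sketch along the question's own lines (find parenthesized tokens, then try to explain them from the initials of the preceding words) is easy to write; it handles the DNB case but not irregular forms like LetCW:

```python
import re

def find_abbrevs(text):
    out = []
    for m in re.finditer(r"\(([A-Za-z]{2,10})\)", text):
        abbrev = m.group(1)
        before = re.findall(r"[A-Za-z']+", text[:m.start()])
        window = before[-len(abbrev):]               # candidate definition words
        initials = "".join(w[0] for w in window).lower()
        if initials == abbrev.lower():
            out.append((abbrev, " ".join(window)))
    return out

print(find_abbrevs("He was working for the Danish National Bank (DNB)."))
# [('DNB', 'Danish National Bank')]
```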
1
2
0
For a project I am working on, I want to identify abbreviations the first time they are introduced in a text. For example: He was working for the Danish National Bank (DNB). (...) The DNB was a great employer. Should match DNB as an abbreviation for Danish National Bank. Not all abbreviations are capitals though: In 2012 the Law equal treatment of Circus Workers (after this: LetCW) was introduced. Which should return extract LetCW. What is the best approach to do this? I am currently thinking about removing "after this" and then taking the same amount of words before the brackets as there are letters in the suspected abbreviation. EDIT: Another interesting case is the abbreviation of a single word, i.e.: Abbreviation (Abbr) or Abbreviation (Abvn)
Challenging Regular Expression for Abbreviations
1.2
0
0
258
46,751,743
2017-10-15T04:36:00.000
0
0
1
1
linux,python-3.x
46,751,883
1
false
0
0
You can use 'python-' + sysconfig.get_config_var('LDVERSION'). There are numerous other variables in that module. Note that this won't work on non-CPython implementations, but they usually require special cases for building anyway.
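A runnable one-liner along those lines:

```python
import sysconfig

print("python-" + sysconfig.get_config_var("LDVERSION"))  # e.g. 'python-3.6m'
```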
1
0
0
I want to be able to pull up the name python-3.5, python3.6m, etc. When I do "python --version", the output is not in the correct format and could be subject to change. Is there any way to find the names, generally in the /usr/bin/* folder? Or should I just grep for them and assume that they will always be in that directory for other users? I am using the command "$ pkg-config python-3.6m --ldlibs --cflags", and I would like a dynamic way to find the "python-3.6m" in that line, so that users don't have to change it every time they run it with a different version of Python.
How can I find the python package name in any linux distribution?
0
0
0
101
46,752,540
2017-10-15T07:01:00.000
0
0
1
0
python,pycharm
46,752,610
1
false
0
0
You can configure python interpreter for PyCharm in Settings > Project Interpreter. Or to reconfigure default interpreter Settings > Default Project > Project Interpreter.
1
0
0
I am using PyCharm Community Edition 2017.1.5 and I have a problem... I am working in Python 3.6.1, but PyCharm is highlighting syntax for Python 2. For example, it does not allow me to use the print() function, because it detects a print statement there. (That means it is only about syntax, because it knows the print() function, but when I use it, it says it is a print statement.) EDIT: It's not finding the print statement from Python 2; it says: Statement expected, found Py:PRINT_KEYWORD, and it doesn't work like the Python 2 print. EDIT 2: I don't know why, but when I disable the Pyxl plugin it works correctly.
Pycharm is highlighting syntax for python2, but I am using python3
0
0
0
751
46,752,760
2017-10-15T07:33:00.000
0
0
0
0
python,html,graph
46,752,813
1
false
1
0
You could add an entry to crontab to run the Python program every 5 minutes (assuming Linux). Alternatively, you could have the PHP call Python and await the refreshed file before responding with the page.
1
0
0
I have a website which has been built using HTML and PHP. I have a Microsoft SQL Server database. I have connected to this database and created several charts using Python. I want to be able to publish these graphs on my website and make the graphs live (so that they are refreshed every 5 minutes or so with latest data). How do I do this?
Live graphs using python on website
0
1
0
161
46,756,606
2017-10-15T15:18:00.000
2
0
0
0
python,python-3.x,machine-learning,scikit-learn
56,999,837
4
false
0
0
Short answer: RandomSplitter initiates a **random split on each chosen feature**, whereas BestSplitter goes through **all possible splits on each chosen feature**. Longer explanation: this is clear when you go through _splitter.pyx. RandomSplitter calculates the improvement only on a threshold that is randomly initiated (ref. lines 761 and 801). BestSplitter goes through all possible splits in a while loop (ref. lines 436, where the loop starts, and 462). [Note: line numbers are relative to version 0.21.2.] As opposed to the earlier responses from 15 Oct 2017 and 1 Feb 2018, RandomSplitter and BestSplitter both loop through all relevant features. This is also evident in _splitter.pyx.
2
11
1
The sklearn DecisionTreeClassifier has a attribute called "splitter" , it is set to "best" by default, what does setting it to "best" or "random" do? I couldn't find enough information from the official documentation.
What does "splitter" attribute in sklearn's DecisionTreeClassifier do?
0.099668
0
0
8,211
46,756,606
2017-10-15T15:18:00.000
4
0
0
0
python,python-3.x,machine-learning,scikit-learn
48,555,365
4
false
0
0
The "Random" setting selects a feature at random, then splits it at random and calculates the gini. It repeats this a number of times, comparing all the splits and then takes the best one. This has a few advantages: It's less computation intensive than calculating the optimal split of every feature at every leaf. It should be less prone to overfitting. The additional randomness is useful if your decision tree is a component of an ensemble method.
2
11
1
The sklearn DecisionTreeClassifier has a attribute called "splitter" , it is set to "best" by default, what does setting it to "best" or "random" do? I couldn't find enough information from the official documentation.
What does "splitter" attribute in sklearn's DecisionTreeClassifier do?
0.197375
0
0
8,211
46,759,726
2017-10-15T20:35:00.000
0
0
0
0
python,screen-scraping
46,760,481
1
false
1
0
Hacking a game, I see. Provided you are aware that what you are doing may diminish the validity of others' playtime, as well as potentially committing a crime, I shall provide a solution: You would need to get a piece of "sniffing" software which allows modifications. The modifications are likely to be the addition of "querystring" and "JSON" parsers to read the data traffic. At this point, you can begin learning how their particular system works, slowly replacing traffic with modified versions for your nefarious purposes. "TCP sniffing" involves creating a raw TCP socket in whatever language and then repeatedly reading/recv'ing from that socket. The socket MUST be bound TO THE SPECIFIC NETWORK INTERFACE CARD (NIC). Hint: "LOCALHOST" and "127.0.0.1" are NOT the addresses of any NIC. You would then parse the data as an HTTP request/response stream, ensuring that you can read the contents of the frame correctly. You would then be looking to modify either the contents of the POST body or the GET querystring, depending on how the game designers designed their network system.
1
0
0
I know very little about js and I'm trying to create a program that will get information about a browser based javascript game while I play it. I can't use a webdriver as I will be playing the game at the time. When I inspect the js on google chrome and look at the console, I can see all the information that I want to work with but I don't know how I can save that to a file or access it at the time in order to parse it. Preferably I'd be able to do this with python as that's what I will use for my code that will handle the info once I have it. Any help or a point in the right direction would be appreciated, thank you :) ps, I'm on Windows if that's important
How to scrape javascript while using a webpage normally?
0
0
1
36
46,761,139
2017-10-15T23:42:00.000
0
0
0
0
python-2.7,module,ttx-fonttools
46,761,612
1
false
0
1
I fixed this by installing Fonttools from the source files, instead of installing with pip.
1
0
0
I've installed, uninstalled, reinstalled FontTools and Fontmake via pip. However, whenever I try to call Fontmake in terminal I get the following error. Py23 appears to be a Fonttools dependency, which is also installed. Thanks in advance for any help! Traceback (most recent call last): File "/usr/local/bin/fontmake", line 9, in load_entry_point('fontmake==1.3.1.dev0', 'console_scripts', 'fontmake')() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 357, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2394, in load_entry_point return ep.load() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2108, in load entry = import(self.module_name, globals(),globals(), ['name']) File "/Library/Python/2.7/site-packages/fontmake/main.py", line 18, in from fontmake.font_project import FontProject File "/Library/Python/2.7/site-packages/fontmake/font_project.py", line 37, in from defcon import Font File "/Library/Python/2.7/site-packages/defcon/init.py", line 10, in from defcon.objects.font import Font File "/Library/Python/2.7/site-packages/defcon/objects/font.py", line 6, in from ufoLib import UFOReader, UFOWriter File "/Library/Python/2.7/site-packages/ufoLib/init.py", line 6, in from fontTools.misc.py23 import basestring, unicode ImportError: No module named py23
Error message after installing fontmake: "No module named py23"
0
0
0
264
46,766,899
2017-10-16T09:22:00.000
0
1
0
0
python,testing,timeout,pytest
58,541,728
2
false
0
0
This has been fully supported by pytest-timeout right from the beginning; you want to use the signal method as described in the pytest-timeout readme. Please do read the readme carefully, as it comes with some caveats. And indeed it is implemented using SIGALRM, as the other answer also suggests, but it already exists, so there is no need to re-do this.
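A minimal usage sketch (per the pytest-timeout readme, the signal method is Unix-only):

```python
import time
import pytest

@pytest.mark.timeout(2, method="signal")
def test_slow():
    time.sleep(5)   # this test fails with a timeout; the rest of the run continues
```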
1
4
0
I know that with pytest-timeout I can specify a timeout for each test case, but a single failure terminates the whole test run instead of failing just the offending test case. Am I forced to make my own solution for this, or are there ready-to-use tools which provide that?
pytest-timeout - fail test instead killing whole test run
0
0
0
3,317
46,767,001
2017-10-16T09:28:00.000
0
0
0
0
python-3.x,tensorflow,cudnn
48,411,184
2
true
0
0
In short: CudnnGRU and CudnnLSTM can (and must) be used on a GPU, while the normal RNN implementations don't require one. So if you have tensorflow-gpu, the cuDNN implementation of the RNN cells will run faster.
1
0
1
To create RNN cells, there are classes like GRUCell and LSTMCell, which can be used later to create RNN layers. There are also two other classes, CudnnGRU and CudnnLSTM, which can be used directly to create RNN layers. In the documentation they say that the latter classes have a cuDNN implementation. Why should I use (or not use) these cuDNN-implemented classes over the classical RNN implementations when I'm creating an RNN model?
What is cuDNN implementation of rnn cells in Tensorflow
1.2
0
0
1,107
46,768,006
2017-10-16T10:21:00.000
0
0
0
1
python,linux,python-2.7
46,769,357
2
false
0
0
I like subprocess.Popen, but it has trouble (it may not manage at all) dealing with '>', which is inconvenient if you have a '>' in the command line. Otherwise, use subprocess.check_output.
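A short sketch of both cases (the commands and file name are illustrative, Linux assumed):

```python
import subprocess

# No shell involved: safe and usually the preferred form
out = subprocess.check_output(["ls", "-l"])
print(out.decode())

# Shell features like '>' redirection need shell=True (beware injection)
subprocess.check_call("ls -l > listing.txt", shell=True)
```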
1
1
0
I have a Python script that contains a Linux shell command, and I'm using subprocess.check_output. My question is about the fastest Python method to execute a Linux shell command from a Python script, e.g. os.system().
execute linux shell command from python script
0
0
0
107
46,768,213
2017-10-16T10:31:00.000
3
0
1
0
python,fingerprinting,function-definition
52,685,427
1
true
0
0
All you’re looking for is a hash procedure that includes all the salient details of the class’s definition. (Base classes can be included by including their definitions recursively.) To minimize false matches, the basic idea is to apply a wide (cryptographic) hash to a serialization of your class. So start with pickle: it supports more types than hash and, when it uses identity, it uses a reproducible identity based on name. This makes it a good candidate for the base case of a recursive strategy: deal with the functions and classes whose contents are important and let it handle any ancillary objects referenced. So define a serialization by cases. Call an object special if it falls under any case below but the last. For a tuple deemed to contain special objects: The character t The serialization of its len The serialization of each element, in order For a dict deemed to contain special objects: The character d The serialization of its len The serialization of each name and value, in sorted order For a class whose definition is salient: The character C The serialization of its __bases__ The serialization of its vars For a function whose definition is salient: The character f The serialization of its __defaults__ The serialization of its __kwdefaults__ (in Python 3) The serialization of its __closure__ (but with cell values instead of the cells themselves) The serialization of its vars The serialization of its __code__ For a code object (since pickle doesn’t support them at all): The character c The serializations of its co_argcount, co_nlocals, co_flags, co_code, co_consts, co_names, co_freevars, and co_cellvars, in that order; none of these are ever special For a static or class method object: The character s or m The serialization of its __func__ For a property: The character p The serializations of its fget, fset, and fdel, in that order For any other object: pickle.dumps(x,-1) (You never actually store all this: just create a hashlib object of your choice in the top-level function, and in the recursive part update it with each piece of the serialization in turn.) The type tags are to avoid collisions and in particular to be prefix-free. Binary pickles are already prefix-free. You can base the decision about a container on a deterministic analysis of its contents (even if heuristic) or on context, so long as you’re consistent. As always, there is something of an art to balancing false positives against false negatives: for a function, you could include __globals__ (with pruning of objects already serialized to avoid large if not infinite serializations) or just any __name__ found therein. Omitting co_varnames ignores renaming local variables, which is good unless introspection is important; similarly for co_filename and co_name. You may need to support more types: look for static attributes and default arguments that don’t pickle correctly (because they contain references to special types) or at all. Note of course that some types (like file objects) are unpicklable because it’s difficult or impossible to serialize them (although unlike pickle you can handle lambdas just like any other function once you’ve done code objects). At some risk of false matches, you can choose to serialize just the type of such objects (as always, prefixed with a character ? to distinguish from actually having the type in that position).
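A minimal sketch of the base idea only; it covers a single function's code and defaults, not the full recursive scheme above, and will fail on unpicklable constants such as nested code objects:

```python
import hashlib
import pickle

def fingerprint_function(fn):
    h = hashlib.sha256()
    code = fn.__code__
    for part in (code.co_code, code.co_consts, code.co_names, fn.__defaults__):
        h.update(pickle.dumps(part, -1))   # serialize each salient piece
    return h.hexdigest()

def extract(x, scale=2):
    return x * scale

print(fingerprint_function(extract))       # stable across interpreter runs
```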
1
4
0
Background When experimenting with machine learning, I often reuse models trained previously, by means of pickling/unpickling. However, when working on the feature-extraction part, it's a challenge not to confuse different models. Therefore, I want to add a check that ensures that the model was trained using exactly the same feature-extraction procedure as the test data. Problem My idea was the following: Along with the model, I'd include in the pickle dump a hash value which fingerprints the feature-extraction procedure. When training a model or using it for prediction/testing, the model wrapper is given a feature-extraction class that conforms to certain protocol. Using hash() on that class won't work, of course, as it isn't persistent across calls. So I thought I could maybe find the source file where the class is defined, and get a hash value from that file. However, there might be a way to get a stable hash value from the class’s in-memory contents directly. This would have two advantages: It would also work if no source file can be found. And it would probably ignore irrelevant changes to the source file (eg. fixing a typo in the module docstring). Do classes have a code object that could be used here?
How to hash a class or function definition?
1.2
0
0
545
46,773,522
2017-10-16T15:12:00.000
1
0
0
1
python,cassandra,cassandra-python-driver
46,839,220
3
false
0
0
Have you considered creating a decorator for your execute or equivalent (e.g. execute_concurrent) that logs the CQL query used for your statement or prepared statement? You can write this in a manner that the CQL query is only logged if the query was executed successfully.
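A hedged sketch of what such a decorator could look like (the query_string attribute lookup is an assumption about the statement objects involved):

```python
import functools
import logging

log = logging.getLogger("cql")

def log_queries(execute):
    @functools.wraps(execute)
    def wrapper(statement, *args, **kwargs):
        result = execute(statement, *args, **kwargs)   # raises on failure
        query = getattr(statement, "query_string", statement)
        log.info("executed: %s", query)                # logged only on success
        return result
    return wrapper

# e.g. session.execute = log_queries(session.execute)
```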
1
10
0
I'm trying to find a way to log all queries run against Cassandra from Python code, specifically logging them as they finish executing when using a BatchStatement. Are there any hooks or callbacks I can use to log this?
Logging all queries with cassandra-python-driver
0.066568
1
0
2,113
46,774,670
2017-10-16T16:13:00.000
0
1
1
0
python-3.x,pyaudio
46,991,552
1
false
1
0
The above issue was happening with the Raspberry Pi NOOBS image. I ended up downloading the Raspbian image, and it is finally working, though pyaudio prints too many warning messages.
1
1
0
I have installed pyaudio (the latest as of today, Oct 16, 2017) on my Raspberry Pi 3 with "sudo pip3 install pyaudio". I am running "python3" for the code below: import pyaudio p = pyaudio.PyAudio() print("Number of devices={}".format(p.get_device_count())) This prints Number of devices=0 Does anyone have the same problem? I need help to resolve this issue. Additional info: "lsusb" prints all the devices; I am able to see the device in alsamixer; I am able to test that the device works. Looks like pyaudio and python3 may have something to do with it.
Raspberry Pi 3 pyaudio does not detect any devices
0
0
0
53
46,775,155
2017-10-16T16:41:00.000
0
0
0
0
python,machine-learning,scikit-learn,lda
58,364,407
3
false
0
0
In case you are using a newer version and using from sklearn.qda import QDA, it will give an error; try from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis instead.
1
8
1
When I run classifier.py in the openface demos directory using classifier.py train ./generated-embeddings/, I get the following error message: --> from sklearn.lda import LDA ModuleNotFoundError: No module named 'sklearn.lda'. I believe I have correctly installed sklearn. What could be the reason for this message?
ImportError: No module named 'sklearn.lda'
0
0
0
15,902