Q_Id — int64 — 337 to 49.3M
CreationDate — stringlengths — 23 to 23
Users Score — int64 — -42 to 1.15k
Other — int64 — 0 to 1
Python Basics and Environment — int64 — 0 to 1
System Administration and DevOps — int64 — 0 to 1
Tags — stringlengths — 6 to 105
A_Id — int64 — 518 to 72.5M
AnswerCount — int64 — 1 to 64
is_accepted — bool — 2 classes
Web Development — int64 — 0 to 1
GUI and Desktop Applications — int64 — 0 to 1
Answer — stringlengths — 6 to 11.6k
Available Count — int64 — 1 to 31
Q_Score — int64 — 0 to 6.79k
Data Science and Machine Learning — int64 — 0 to 1
Question — stringlengths — 15 to 29k
Title — stringlengths — 11 to 150
Score — float64 — -1 to 1.2
Database and SQL — int64 — 0 to 1
Networking and APIs — int64 — 0 to 1
ViewCount — int64 — 8 to 6.81M
46,501,196
2017-09-30T08:51:00.000
0
0
1
0
python,user-interface
56,342,507
2
false
1
0
If you have Windows, you can open cmd and type pip install appjar. If you are using an IDE that has a terminal, it does not matter which operating system you are using: just type pip install appjar in the IDE terminal.
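A minimal sketch to confirm the install worked after running pip install appjar (window title and label text are arbitrary placeholders):

```python
from appJar import gui

# if this import succeeds, the ImportError from the question is resolved
app = gui("appJar test")
app.addLabel("title", "appJar imported successfully")
app.go()
```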
1
0
0
I am very new to coding and was trying to create a GUI using appJar in Atom. I followed the steps on the appJar website, but whenever I run the code this is the message it gives: ImportError: No module named appJar What do I do? Also, am I using appJar correctly?
ImportError: No module named appJar
0
0
0
1,639
46,501,369
2017-09-30T09:15:00.000
1
0
1
0
python,cuda,numba
46,503,585
1
false
0
0
To the best of my knowledge, the cuda.jit facility offered by numba does not allow passing of arguments to the CUDA assembler which would allow control of register allocation, as is possible with the native CUDA toolchain. So I don't think there is a way to do what you have asked about.
1
0
0
As the title says, I would like to know if there is a way to limit the number of registers used by each thread when I launch a kernel. I'm performing a lot of computation on each thread, so the number of registers used is too high and the occupancy is low. I would like to try to reduce the number of registers used in order to improve parallel thread execution, maybe at the cost of more memory accesses. I searched for the answer but I didn't find a solution. I think it is possible to set a maximum number of registers per thread with the CUDA toolchain, but is it also possible when using Numba? EDIT: Maybe also forcing a minimum number of blocks to be executed on a multiprocessor, in order to force the compiler to reduce the number of used registers.
How to limit the number of registers used by each thread in Numba (CUDA)
0.197375
0
0
210
46,504,753
2017-09-30T16:28:00.000
-1
0
1
0
python,string,strip,punctuation
46,505,165
2
false
0
0
re.sub(r'[,;\.\!]+$', '', 'hello. world!!!')
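A slightly fuller sketch of the same idea, wrapped in a function (the function name is illustrative):

```python
import re

def strip_final_punctuation(text):
    # remove trailing , ; . ! only when they appear at the very end of the string
    return re.sub(r'[,;.!]+$', '', text)

print(strip_final_punctuation('hello. world!!!'))
# -> 'hello. world'
print(strip_final_punctuation('This is sentence one. This is sentence two!'))
# -> 'This is sentence one. This is sentence two'
```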
1
1
0
I know I can use .translate(None, string.punctuation) to strip punctuation from a string. However, I'm wondering if there's a way to strip punctuation only if it's the final character. For example: However, only strip the final punctuation. -> However, only strip the final punctuation and This is sentence one. This is sentence two! -> This is sentence one. This is sentence two and This sentence has three exclamation marks!!! -> This sentence has three exclamation marks I know I could write a while loop to do this, but I'm wondering if there's a more elegant/efficient way.
How to strip punctuation only if it's the last character
-0.099668
0
0
789
46,505,052
2017-09-30T17:04:00.000
0
0
1
0
python-3.x,opencv,image-processing
46,505,170
2
false
0
0
The task consists of the following steps: 1. Have the images in a directory, e.g. foo/. 2. Get the list of all images in the foo/ directory. 3. Loop over the list of images: 3.1. img = cv2.imread(images[i], 0) 3.2. ProcessImage(img) # run an arbitrary function on the image 3.3. filename = 'test' + str(i) + '.png' 3.4. cv2.imwrite(filename, img) 4. End of the loop.
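A runnable sketch of that loop (the directory name, file extension, and the processing function are placeholders):

```python
import glob
import cv2

def process_image(img):
    # stand-in for whatever per-image processing you need
    return cv2.GaussianBlur(img, (5, 5), 0)

# gather every image in the foo/ directory
images = sorted(glob.glob('foo/*.png'))

for i, path in enumerate(images):
    img = cv2.imread(path, 0)               # 0 = read as grayscale
    out = process_image(img)
    cv2.imwrite('test' + str(i) + '.png', out)
```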
1
1
0
I am trying to build the code using Python, for which I need to process at least 50 images. So how should I read the images one by one and process them? Is it possible using a loop, and do I need to create a separate database for this, or will just saving all the images in a separate file do?
processing multiple images in sequence in opencv python
0
0
0
4,075
46,509,601
2017-10-01T05:47:00.000
0
0
0
0
python-3.x,matrix,tensorflow,ubuntu-16.04,tensorflow-gpu
46,509,800
1
false
0
0
You can divide the data set into batches and then process your model, or you can use a TensorFlow queue.
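A minimal batching sketch in plain NumPy, not a full TensorFlow input pipeline; the shapes and batch size are illustrative (the matrices in the question are far larger):

```python
import numpy as np

def batches(data, batch_size):
    # yield successive slices instead of materialising everything at once
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

data = np.random.rand(10000, 300).astype(np.float32)
for batch in batches(data, 256):
    pass  # feed `batch` to the model here (e.g. via feed_dict or an input pipeline)
```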
1
0
1
I am trying to model a neural network using TensorFlow, but the matrices are on the order of 800000x300000. When I initialize the variables using the global variable initializer in TensorFlow, the system freezes. How do I deal with this problem? Would TensorFlow with GPU support be able to handle this large matrix?
Tensorflow hangs when initializing a large matrix with a variable. What's the best solution for handling large matrix multiplication in tensorflow?
0
0
0
125
46,516,325
2017-10-01T19:56:00.000
0
0
0
0
python,machine-learning,cluster-analysis,categorical-data
51,921,509
1
false
0
0
Agreeing with @DIMKOIM, Multiple Correspondence Analysis is your best bet. PCA is mainly used for continuous variables. To visualize your data, you can build a scatter plot from scratch.
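A minimal scatter-plot sketch, assuming you already have 2-D coordinates (e.g. from MCA) and the cluster labels from k-modes (variable names are placeholders):

```python
import matplotlib.pyplot as plt

def plot_clusters(coords, labels):
    # coords: (n_samples, 2) array of low-dimensional coordinates
    # labels: cluster assignment for each sample
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap='tab10', s=10)
    plt.xlabel('component 1')
    plt.ylabel('component 2')
    plt.show()
```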
1
0
1
I have a high-dimensional dataset which is categorical in nature, and I have used k-modes to identify clusters. I want to visualize the clusters; what would be the best way to do that? PCA doesn't seem to be a recommended method for dimensionality reduction in a categorical dataset, so how do I visualize in such a scenario?
How to plot a cluster in python prepared using categorical data
0
0
0
1,437
46,516,782
2017-10-01T20:52:00.000
0
0
0
0
python,opencv,wait,python-2.x
46,517,378
1
true
0
0
I was able to solve the problem by putting what I want to run before resuming processing (playing the sound) in another script and performing import sound, then breaking out of the loop to stop the program. I can't figure out how to start it again, but for my purposes I can restart it manually.
1
0
1
I have a motion detection program in OpenCV, and I want it to play a sound when it detects motion. I use winsound. However, OpenCV still seems to be gathering frames while the sound is playing, so I want to know a way to stop all OpenCV processes for about 17 seconds. I tried time.sleep and running it with the -u tag. Neither worked. Any ideas? Thanks.
Stop OpenCV for seventeen seconds? python
1.2
0
0
32
46,517,118
2017-10-01T21:33:00.000
3
0
0
0
python,numpy,keras,loss
46,517,906
1
true
0
0
No, gradients are needed to perform gradient descent, so if you only have a numerical loss, it cannot be differentiated, in contrast to a symbolic loss that is required by Keras. Your only chance is to implement your loss using keras.backend functions or to use another Deep Learning framework that might let you specify the gradient manually. You still would need to compute the gradient somehow.
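A minimal sketch of what a backend-only loss looks like (mean squared error chosen purely as an illustration):

```python
from keras import backend as K

# a loss built only from backend ops stays symbolic, so Keras can differentiate it
def symbolic_mse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

# model.compile(optimizer='adam', loss=symbolic_mse)
```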
1
2
1
I have a loss function implemented that uses numpy and opencv methods. This function also uses the input image and the output of the network. Is it possible to convert the input and the output layers to numpy arrays, compute the loss and use it to optimize the network?
Loss layer on Keras using two input layers and numpy operations
1.2
0
0
228
46,518,440
2017-10-02T01:23:00.000
0
0
1
0
python,exception
46,518,479
2
false
0
0
When methods are called, check the arguments immediately. However, if you only use the function internally, you do not need to raise an error. It is a bad habit to add a lot of code that raises errors everywhere.
2
0
0
I'm working on an API which is for the most part a wrapper around NumPy. In some cases, the wrapper method just calls a NumPy method and returns what the NumPy method returns. In those cases, what is better practice: should the wrapper methods validate arguments and raise errors, or should they pass the arguments to NumPy and let NumPy raise the exception?
When to raise my own exceptions in Python
0
0
0
55
46,518,440
2017-10-02T01:23:00.000
1
0
1
0
python,exception
46,518,467
2
false
0
0
If your API has additional requirements for input validation then it is appropriate to raise exceptions, otherwise you could just let the input be passed to NumPy and have NumPy raise the input validation exceptions.
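An illustrative sketch of that split (the function and its requirement are hypothetical): validate only what the wrapper itself requires and let NumPy raise for everything else.

```python
import numpy as np

def column_means(data):
    arr = np.asarray(data)
    if arr.ndim != 2:                      # an API-specific requirement
        raise ValueError("column_means expects a 2-D array")
    return arr.mean(axis=0)                # other bad input raises inside NumPy
```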
2
0
0
I'm working on an API which is for the most part a wrapper around NumPy. In some cases, the wrapper method just calls a NumPy method and returns what the NumPy method returns. In those cases, what is better practice: should the wrapper methods validate arguments and raise errors, or should they pass the arguments to NumPy and let NumPy raise the exception?
When to raise my own exceptions in Python
0.099668
0
0
55
46,520,136
2017-10-02T05:51:00.000
2
0
1
0
java,python
46,520,173
2
true
1
0
There are at least three ways to achieve that. a) You could use java.lang.Runtime (as in Runtime.getRuntime().exec(...)) to launch an external process (from Java side), the external process being your Python script. b) You could do the same as a), just using Python as launcher. c) You could use some Python-Java binding, and from Java side use a separate Thread to run your Python code.
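A rough sketch of option (b), launching the Java side from Python in a separate thread while the Python work runs in parallel (the Java class name and the Python function are placeholders):

```python
import subprocess
from threading import Thread

results = {}

def run_java():
    # launch the Java program as an external process (class name is hypothetical)
    out = subprocess.run(['java', 'MyJavaTask'], capture_output=True, text=True)
    results['java'] = out.stdout

def run_python():
    results['python'] = sum(range(1000))   # stand-in for the Python function

t = Thread(target=run_java)
t.start()
run_python()
t.join()
print(results)
```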
1
0
0
Is there any way to run a function from Python and a function from Java in one app in parallel, and get the result of each function to use in further processing?
Run two pieces of code from different programming language in parallel
1.2
0
0
62
46,520,193
2017-10-02T05:57:00.000
0
0
0
0
python,python-requests,twitch
46,521,360
1
true
0
0
There are a lot of reasons for this error, and the main one is that you are violating Twitch's user policy (which directly prohibits using scrapers) and the server banned some of your requests. You should try to use sessions when you access the site: session = requests.Session(), and use session.get instead of requests.get. Other things to try are limiting your request rate and rotating different sessions with different headers (don't mix headers between sessions).
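A small sketch of the session-plus-rate-limit idea (the header value, endpoint, and retry counts are placeholders, not working credentials):

```python
import time
import requests

session = requests.Session()
session.headers.update({'Client-ID': 'YOUR_CLIENT_ID'})   # placeholder credential

def get_with_retry(url, retries=3, delay=2.0):
    # modest rate limiting plus a retry around session.get
    for _ in range(retries):
        try:
            return session.get(url, timeout=10)
        except requests.exceptions.ConnectionError:
            time.sleep(delay)
    raise RuntimeError('request kept failing after %d retries' % retries)

resp = get_with_retry('https://api.twitch.tv/helix/streams')
```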
1
0
0
I have an issue with the requests lib: with code like requests.get("HTTPS://api.twitch.tv/helix/...", headers = headers), with the information that the Twitch API needs in the variable "headers". Unfortunately, with except Exception, e: print(e) I get ('Connection aborted.', BadStatusLine("''",)). I already tried to fake my user agent. I'm almost sure that it isn't from the server (Twitch) because I also use the older API and I have the same bug, while I had already used it successfully (since then, I have reset my Raspberry Pi, which may explain it...). It doesn't give this error on every request, only about 1 in 10, so it's a bit embarrassing. I also have this error only with Raspbian, not with Windows. Thanks for helping me, a young lost coder.
Python - requests lib - error ('Connection aborted.', BadStatusLine("''",))
1.2
0
1
960
46,524,471
2017-10-02T11:19:00.000
0
0
1
0
python,turtle-graphics
62,462,915
3
false
0
0
penup() is the full name, and up() is what's called an alias. An alias is what coders use to condense long function names. Say you have importFuncWhenClicked(object, executionFile, program); you can condense it to something like importFOnClick(obj, exc, prgmFile).
2
1
0
Is there any difference between penup() and up() in turtle python? I used both the methods in a simple animation program and found no difference.
Difference between penup() and up() in turtle python?
0
0
0
1,792
46,524,471
2017-10-02T11:19:00.000
0
0
1
0
python,turtle-graphics
55,073,030
3
false
0
0
They are the same thing, just in different wording. penup() == up()
2
1
0
Is there any difference between penup() and up() in turtle python? I used both the methods in a simple animation program and found no difference.
Difference between penup() and up() in turtle python?
0
0
0
1,792
46,526,112
2017-10-02T13:08:00.000
0
0
1
0
python,bash,datetime,plot,storing-data
46,526,690
1
false
0
0
Firstly, if you plan on accessing the data structure which holds the URL -> (time, Price) data for specific URLs, use a dictionary, since URLs are unique (the URL will be the key in the dictionary). Otherwise you can keep a list of (URL, (time, Price)) tuples. Secondly, use a list of (time, Price) tuples, since you don't need to sort them (they will already be sorted by the way you insert them). {} - dictionary, [] - list, () - tuple. Option 1: [(URL, [(time, Price)])] Option 2: {URL: [(time, Price)]}
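A tiny sketch of option 2, a dict keyed by URL whose values are lists of (time, price) tuples (the URL and prices are placeholders):

```python
from datetime import datetime

price_history = {}

def record_price(url, price):
    # appending keeps each product's list ordered by fetch time automatically
    price_history.setdefault(url, []).append((datetime.now(), price))

record_price('https://example.com/item', 19.99)
record_price('https://example.com/item', 18.50)
```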
1
0
1
I'm trying to learn Python by converting a crude bash script of mine. It runs every 5 minutes, and basically does the following: loops line-by-line through a .csv file, grabs the first element ($URL), fetches $URL with wget and extracts $price from the page, adds $price to the end of the same line in the .csv file, and continues looping for the remaining lines. I use it to keep track of product prices on eBay and similar websites. It notifies me when a lower price is found and plots graphs with the product's price history. It's simple enough that I could just replicate the algorithm in Python; however, as I'm trying to learn it, it seems there are several types of objects (lists, dicts, etc.) that could do the storing much more efficiently. My plan is to use pickle or even a simple DB solution (like dataset) from the beginning, instead of messing around with .csv files and extracting the data via sketchy string manipulations. One of the improvements I would also like to make is to store the absolute time of each fetch alongside its price, so I can plot a "true timed" graph, instead of assuming each cycle is 5 minutes away from the previous one (which it never is). Therefore, my question sums to... Assuming I need to work with the following data structure: a list of products, each with its respective URL and its list of Time<->Price pairs, what would be the best strategy in Python for doing so? Should I use dictionaries, lists, sets, or maybe even create a custom class for products?
Best strategy for storing timed data and plotting afterwards?
0
0
0
34
46,528,522
2017-10-02T15:19:00.000
0
0
0
0
python-requests
46,528,848
2
false
1
0
You need to learn about Python GUI libraries like PyQt5 and install one under your Python libraries so that, through QtWebEngineWidget and QWebView, you can render your page to display it.
1
2
0
I am trying to open a page and get some data from it, using Requests and BeautifulSoup. How can I see the response of a requests.post in my browser as a page instead of the source code?
python requests: show response in browser
0
0
1
987
46,529,092
2017-10-02T15:51:00.000
-2
0
1
0
python,python-3.x,security,string-formatting,user-input
46,529,232
1
false
0
0
The operation itself is not risky, even if the user enters a valid Python expression - it is not eval'ed. If the result is intended to go into a SQL database, you should consider quoting it before building the SQL string.
1
2
0
Would it be risky to format any string a user entered with Python's format() function, with arguments/values coming from user input too ? (e.g. user_input_string.format(*user_input_args))
Is allowing user input in python's format() a security risk?
-0.379949
0
0
525
46,531,265
2017-10-02T18:12:00.000
1
0
0
0
python,django,web-applications,python-multithreading,clips
46,542,127
2
false
1
0
It is usually a good idea to split your Expert System into separate "shards". It keeps the rule base simpler (as you don't need to distinguish to which user a fact is referring to) and allows you to scale horizontally when more users will be added. If running one ES per user sounds overkill, you can decrease the granularity by sharding based on, for example, the first letter of the user surname or Id. When designing similar solutions I tend to de-couple the frontend application with the ES by using a queueing system. This allows you to modify the cluster layout without the need to change the public APIs. | Flask | ----> | RabbitMQ | ----> | ES Worker | In case you want to change your sharding strategy, you can simply re-configure the broker queues layout without affecting the client/end-user.
1
5
0
I am relatively new to developing web applications. I would like your comments and suggestions for improvement on the following architectural considerations. I have developed an expert system (ES) using CLIPS. Now I am planning to provide this to a variety of users from our company as a web application. Before I start going into greater detail, I am currently thinking about which technologies should be involved. The goal is that the user of the web app faces a chat-like animation which guides him to the final result while he or she provides more and more input to the ES. After conducting some research on my own I came up with the following idea: in the backend I use PyCLIPS as an interface between Python and CLIPS, then I use Django for integrating my Python code into the web page, dynamically altering the chat between user and ES. There is one thing which is still troubling me a lot: how shall I manage many concurrent users? Shall I use one ES with every user having an individual set of facts, or shall every user have his or her own instance of the ES? Do you have any other high-level approaches for this problem which could be superior to this one? I am looking forward to your experience and input regarding this matter. Best
Choice of architecture for exposing CLIPS expert system as web application
0.099668
0
0
371
46,531,585
2017-10-02T18:34:00.000
0
1
1
0
python,ios,swift
46,531,927
2
false
0
0
You can't do that. Apple does not allow it.
2
0
0
I want to write an app that has the ability to call a script of some language from an IOS app at runtime. From what I have found so far this is not possible unless I do something like embedding Python in the app. Basically I want to have the ability to add an object with some properties and functions to a database online. The user will then be able to download the object to their IOS device and the object and its functionality will be added to the app. There might be hundreds of these scripts available so hard-coding them in Swift seems impractical.
Calling a script (some language) from a Swift app at Runtime
0
0
0
90
46,531,585
2017-10-02T18:34:00.000
0
1
1
0
python,ios,swift
46,532,638
2
false
0
0
It seems obvious, but you can probably run scripts as JS and catch the result from them - e.g. put your JS at some endpoint and execute it. But for your purpose I don't think this solution is a good idea.
2
0
0
I want to write an app that has the ability to call a script of some language from an IOS app at runtime. From what I have found so far this is not possible unless I do something like embedding Python in the app. Basically I want to have the ability to add an object with some properties and functions to a database online. The user will then be able to download the object to their IOS device and the object and its functionality will be added to the app. There might be hundreds of these scripts available so hard-coding them in Swift seems impractical.
Calling a script (some language) from a Swift app at Runtime
0
0
0
90
46,535,012
2017-10-02T23:17:00.000
1
0
0
0
python,cassandra,zip
47,168,055
1
false
0
0
If your files are comma-delimited and match the table schema (or can be made so using any variety of command line tools), you might consider piping the unzip output to cqlsh --execute 'COPY ks.table FROM STDIN'
1
0
0
I have a huge number of zipped files (named by timestamp), which are essentially delimited text files when unzipped. I have to get all this data into Cassandra (a one-time dump). As the number of zip files is huge, is there a way I can redirect the extracted file to Cassandra directly instead of storing it again locally before loading it into Cassandra? (I'm using Python for this.)
Data from Zipped file to Cassandra
0.197375
0
0
185
46,536,893
2017-10-03T03:44:00.000
0
0
1
0
python,ubuntu,tensorflow,pycharm,tensorflow-gpu
50,614,889
1
true
0
0
Actually the problem was that the Python environment for the PyCharm project was not the same as the one in the run configurations. The issue was fixed by changing the environment in the run configurations.
1
1
1
When I'm running my TensorFlow training module in the PyCharm IDE on Ubuntu 16.04, it doesn't show any training with the GPU and it trains with the CPU as usual. But when I run the same Python script from the terminal, it runs using GPU training. I want to know how to configure GPU training in the PyCharm IDE.
Tensorflow GPU doesn't work in Pycharm
1.2
0
0
691
46,536,979
2017-10-03T03:55:00.000
1
0
1
0
python,loops
46,537,097
3
false
0
0
While not a perfect solution, could you take the average player skill and rank the n players based on this average? Then, based on these values, use some heuristic to try and "balance" out these players across the two teams. A simple example would be to assign teams like so (highest ranked = 10, lowest ranked = 1): Team 1 = 10 7 6 3 1, Team 2 = 9 8 5 4 2. Again, not perfect, but much less expensive than a 10! search.
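A greedy variant of that idea as a sketch (the skill values are made up): give the next-best player to whichever team is currently weaker, capping each team at five.

```python
players = {'p1': 90, 'p2': 85, 'p3': 80, 'p4': 78, 'p5': 70,
           'p6': 65, 'p7': 60, 'p8': 55, 'p9': 50, 'p10': 40}

ranked = sorted(players, key=players.get, reverse=True)
team1, team2 = [], []
for name in ranked:
    t1, t2 = sum(players[p] for p in team1), sum(players[p] for p in team2)
    # join team1 only if it is not ahead and still has room; otherwise team2
    if (t1 <= t2 and len(team1) < 5) or len(team2) >= 5:
        team1.append(name)
    else:
        team2.append(name)

print(team1, sum(players[p] for p in team1))
print(team2, sum(players[p] for p in team2))
```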
1
2
0
I have a list of 10 players, each with a respective skill score. I'm trying to organise them into 2 teams of 5, where the total skill score of each team is as close as possible. Iterating over every combination is obviously not very efficient as identical teams will occur. Is there a python library or function that can either efficiently solve this problem or at least just iterate over the correct combinations? Iterating over 10! combinations isn't so bad if that is the easiest answer.
python iterate over team combinations
0.066568
0
0
984
46,538,313
2017-10-03T06:23:00.000
2
0
0
0
python,odbc,teradata
49,409,451
1
false
0
0
I had exactly the same question. As the REST web server is active for us, I just ran a few tests. I tested PyTD with the REST and ODBC back ends, and JDBC using jaydebeapi + JPype1. I used Python 3.5 on a CentOS 7 machine; I got similar results with Python 3.6 on CentOS and on Windows. REST was the fastest and JDBC was the slowest. That is interesting, because in R JDBC was really fast, which probably means JPype is the bottleneck. REST was also very fast for writing, but my guess is that writing could be improved in JDBC by using prepared statements appropriately. We are now going to switch to REST for production. Let's see how it goes; it is also not problem-free, for sure. Another advantage is that our analysts also want to work on their own PCs/Macs, and REST is the easiest to install, particularly on Windows (you do pip install teradata and you are done, while for ODBC and jaydebeapi + JPype you need a compiler, and with ODBC you spend some time getting it configured right). If speed is critical, I guess another way would be to write a Java command-line app that fetches the rows, writes them to a CSV, and then read the CSV from Python. I did not test that, but based on my previous experience with these kinds of issues I bet it would be faster than anything else. Selecting 1M rows: Python 3 JDBC 24 min, Python 3 ODBC 6.5 min, Python 3 REST 4 min, R JDBC 35 s. Selecting 100K rows: Python 3 JDBC 141 s, Python 3 ODBC 41 s, Python 3 REST 16 s, R JDBC 5 s. Inserting 100K rows: Python 3 JDBC got errors (too lazy to correct them), Python 3 ODBC 7 min, Python 3 REST 8 s (batch) / 9 min (no batch), R JDBC 8 min.
1
1
0
I'm planning on using the Teradata Python module, which can use either the Teradata REST API or ODBC to connect to Teradata. I'm wondering what the performance would be like for REST vs. ODBC connection methods for fairly large data pulls (> 1 million rows, > 1 GB of results). Information on Teradata's site suggests that the use case for the REST API is more for direct access of Teradata by browsers or web applications, which implies to me that it may not be optimized for queries that return more data than a browser would be expected to handle. I also wonder if JSON overhead will make it less efficient than the ODBC data format for sending query results over the network. Does anyone have experience with Teradata REST services performance or can point to any comparisons between REST and ODBC for Teradata?
How does the Teradata REST API performance compare to other means of querying Teradata?
0.379949
0
1
2,042
46,539,472
2017-10-03T07:39:00.000
1
0
1
0
python
60,165,918
1
false
0
0
Actually, the only way to run the Python files without installing the C++ packages is to compile your Python script with Python 3.4. Install Python 3.4 on your computer and: 1. install pypiwin32 version 219 (it's needed for installing PyInstaller): pip install pypiwin32==219 2. install PyInstaller version 3: pip install pyinstaller==3.0 and have fun with your compiled code :)
1
1
0
I have an .exe compiled from a Python file using PyInstaller. The .exe file works fine on Win10, but it causes an error (the program can't start because api-ms-win-crt-runtime-l1-1-0.dll is missing from your computer) when I run it on Win7 or Win8.1 machines.
Is it possible to run an executable compiled with PyInstaller without installing the Visual C++ Redistributable Packages?
0.197375
0
0
384
46,543,779
2017-10-03T11:42:00.000
-1
0
0
0
python,scipy,data-mining,binning
46,578,602
2
false
0
0
Don't expect everything to require a library. Both binnings can be implemented in 1 or 2 lines of Python code if you think about them for a minute. It probably takes you longer to find/install/study a library than to just write this code yourself.
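A short sketch of both binnings using NumPy (the data and bin count are illustrative):

```python
import numpy as np

data = np.random.rand(1000) * 100   # illustrative data
k = 5                               # number of bins

# equal-width: k bins of identical width across the data range
width_edges = np.linspace(data.min(), data.max(), k + 1)
width_labels = np.digitize(data, width_edges[1:-1])

# equal-depth (equal-frequency): k bins holding roughly the same number of points
depth_edges = np.quantile(data, np.linspace(0, 1, k + 1))
depth_labels = np.digitize(data, depth_edges[1:-1])
```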
1
1
1
I have found several examples of equal-mean binning using scipy, but I am wondering if it is possible to use a library for equal-width or equal-depth binning. Actually, I'm fine using other libraries, not only scipy.
Equal-width and equal-depth binning, using scipy
-0.099668
0
0
1,881
46,544,518
2017-10-03T12:22:00.000
0
0
0
0
python,django,database,postgresql,sqlite
46,544,581
1
false
1
0
Better to create the Postgres database now. Then write a Python script which takes the data from the MySQL database and imports it into the Postgres database.
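A rough sketch of such a script (connection details, table and column names are all placeholders, not the asker's actual schema):

```python
import pymysql
import psycopg2

# copy rows from the old MySQL table into the new Postgres table
mysql_conn = pymysql.connect(host='localhost', user='old', password='secret', db='legacy')
pg_conn = psycopg2.connect('dbname=newproject user=django password=secret')

src = mysql_conn.cursor()
dst = pg_conn.cursor()

src.execute('SELECT id, name, created FROM old_table')
for row in src.fetchall():
    dst.execute('INSERT INTO new_table (id, name, created) VALUES (%s, %s, %s)', row)

pg_conn.commit()
mysql_conn.close()
pg_conn.close()
```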
1
0
0
I'm updating from an ancient language to Django. I want to keep the data from the old project in the new one, but the old project is MySQL, and I'm currently using SQLite3 in dev mode, though I've read that PostgreSQL is the most capable. So the first question is: is it better to set up PostgreSQL while in development, or is it an easy transition from SQLite3 to PostgreSQL? And as for the data in the old project: I am bumping up the table structure from the old MySQL structure, since it has many relational tables, and this is handled internally with ForeignKey and ManyToMany in SQLite3 (same in PostgreSQL, I guess). So I'm thinking about how to transfer the data. It's not really much data, maybe 3-5,000 rows. The problem is that I don't want to have the same table structure, so an import would be a terrible idea; I want to have the sweet functionality provided by SQLite3/PostgreSQL. One idea I had was to join all the data and create a nested JSON for each post, and then define which table it goes into so the relations are kept. But this is just my guessing. So I'm asking you if there is a proper way to do this? Thanks!
Importing data from multiple related tables in mySQL to SQLite3 or postgreSQL
0
1
0
31
46,545,454
2017-10-03T13:12:00.000
0
0
0
0
python,kivy,buildozer
46,575,561
1
false
0
1
The error is the 'error: unrecognized arguments: --sdk 19' part, the rest is not important. The problem arises from a regression in python-for-android, as this argument was removed but is still passed by buildozer. I've re-added the argument (with a deprecation warning) and created a PR to stop buildozer calling it anyway. This means that if you clean everything and try again, the error should no longer occur.
1
0
0
I'm using Buildozer to convert a python file to android APK (using Kivy) and it gets quite far through the process but then errors. Any ideas what is causing this error at the end? toolchain.py: error: unrecognized arguments: --sdk 19 Could not find hostpython, will not compile to .pyo (this is normal with python3) Command failed: /usr/bin/python -m pythonforandroid.toolchain apk --debug --bootstrap=sdl2 --dist_name KivyTest --name KivyApp --version 0.1 --package doublejgames.com.kivytest --android_api 19 --sdk 19 --minsdk 9 --private /home/kivy/Desktop/.buildozer/android/app --orientation landscape --copy-libs --arch armeabi-v7a --color=always --storage-dir=/home/kivy/Desktop/.buildozer/android/platform/build This seems to be the main error: toolchain.py: error: unrecognized arguments: --sdk 19 Could not find hostpython, will not compile to .pyo (this is normal with python3) In my buildozer.spec file, I'm using the requirements: requirements = kivy, python3crystax==3.6 I also tried just requirements = kivy, python3crystax Any help would be appreciated! Thanks.
Could not find hostpython, will not compile to .pyo (Buildozer python-to-android)
0
0
0
750
46,546,388
2017-10-03T13:57:00.000
0
0
0
0
python,csv,google-api,google-bigquery,google-python-api
46,546,554
1
true
0
0
You can use the pandas library for that. import pandas as pd data = pd.read_csv('input_data.csv') useful_columns = ['col1', 'col2', ... ] # list the columns you need data[useful_columns].to_csv('result_data.csv', index=False) # index=False is to prevent creating an extra column
1
1
1
I am trying to upload data from certain fields in a CSV file to an already existing table. From my understanding, the way to do this is to create a new table and then append the relevant columns of the newly created table to the corresponding columns of the main table. How exactly do I append certain columns of data from one table to another? As in, what specific commands? I am using the bigquery api and the python-client-library.
How to Skip Columns of CSV file
1.2
1
0
4,039
46,547,897
2017-10-03T15:11:00.000
2
0
0
0
python,django,geolocation,geoip
46,548,014
1
false
1
0
First of all, you can NOT get the exact location of a user by IP. Some ISPs' IPs are not related to the user's location but to their IDC location. So if you "really" want your client's location, you should use the client's (browser's) Geolocation API (front-end). What you have to do is: get the user location via the Geolocation API, post the user location to your server, return location-based information, and update your webpage (DOM) with that info.
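A sketch of the server side of that flow: the browser posts the coordinates it obtained from the JS Geolocation API and gets location-based data back (URL wiring, field names, and the weather lookup are placeholders):

```python
import json
from django.http import JsonResponse

def location_weather(request):
    data = json.loads(request.body)
    lat, lon = data['latitude'], data['longitude']
    weather = {'summary': 'sunny'}        # replace with a real weather lookup
    return JsonResponse({'lat': lat, 'lon': lon, 'weather': weather})
```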
1
0
0
I have a really basic Django app to get weather. I need to get the user's location to show them their current location's weather. I am using GeoIP for that, but an issue has come up: GeoIP does not have information for all IP addresses, and it returns NoneType for those IP addresses. I want to know if there is any other precise way by which I can get info about the user's current latitude and longitude, like maybe a browser API? It should not miss any user's location like GeoIP does.
django - How to get location through browser IP
0.379949
0
1
1,361
46,549,832
2017-10-03T17:03:00.000
1
0
1
0
python,anaconda
46,600,829
9
false
0
0
pip some-package is installing into the root Anaconda environment because it is using pip from the Anaconda library. Anaconda adds the Anaconda root dir to PATH before /usr/bin/, so when you use pip it finds the one in the Anaconda root. Check the path of pip using which pip; this will tell you the complete path of pip. You can install packages for the default Python using /usr/bin/python -m pip install some-package, or use /path/to/default/pip install some-package.
2
10
0
For development in Python, I am using Miniconda on my Mac with macos Sierra. However, I have to use a framework that only works with the default Python (present at /usr/bin/python). My question is: How can I install packages for this default Python? If I use pip some-package, this will automatically install the package for the root conda environment. EDIT: As discussed in the comments, I agree it is a bad idea to mess with the system default version of Python. Instead, I would like this SDK to work in a conda environment or with Python 2.7 installed from python.org. However, none of these seem to work! How to get this working?
Use default Python while having Anaconda
0.022219
0
0
5,355
46,549,832
2017-10-03T17:03:00.000
1
0
1
0
python,anaconda
46,674,074
9
false
0
0
I too have a Mac with Sierra. Assumption: let's say you have Anaconda. Then you would have the default Python (Python 2) and, say, this Anaconda is Python 3. Secret: the way any shell's default python is selected is via the PATH variable set in the shell. So when you install Anaconda, the installer will try to add new path entries to your default bash shell. Solution: have one Python (2 or 3) as the default. For the less-used one, try using the full path. If you have to, use the cheat code: create /usr/bin/python as a symbolic link to the actual location of the Python you want (save the old one first). This can be altered according to your usage. Hope this works!
2
10
0
For development in Python, I am using Miniconda on my Mac with macos Sierra. However, I have to use a framework that only works with the default Python (present at /usr/bin/python). My question is: How can I install packages for this default Python? If I use pip some-package, this will automatically install the package for the root conda environment. EDIT: As discussed in the comments, I agree it is a bad idea to mess with the system default version of Python. Instead, I would like this SDK to work in a conda environment or with Python 2.7 installed from python.org. However, none of these seem to work! How to get this working?
Use default Python while having Anaconda
0.022219
0
0
5,355
46,551,551
2017-10-03T18:56:00.000
8
0
1
0
python,ssh,jupyter,tmux,gnu-screen
58,684,998
5
false
0
0
Just right-click on the Jupyter notebook logo in the currently running server (you probably have a server running already), then click on "Copy link" and paste the link into a text editor, maybe MS Word. You will see the token in the link; copy and paste it where the token is required. It will work.
1
91
0
How do you check the login tokens for all running jupyter notebook instances? Example: you have a notebook running in tmux or screen permanently, and login in remotely through ssh. Sometimes, particularly if you're logging in after a long time, the token is requested again in order to access the notebook session. How do you get hold of the token without having to kill and restart the notebook session with a new token?
List running Jupyter notebooks and tokens
1
0
0
124,609
46,551,938
2017-10-03T19:25:00.000
0
0
1
0
python,unit-testing,gcc,code-coverage,gcovr
48,336,451
1
false
0
0
Try gcovr -r ./ and gcovr -r ./ --branches from the root folder where you have your .gcno and .gcda files to get the summary of line and branch coverage respectively
1
0
0
I'm using gcov already to get code coverage results. I want to use gcovr. I've installed the gcovr package from Cygwin. Now, I've never used Python. I'm confused because I've got C:\cygwin\lib\python2.7\site-packages\gcovr with __init__.py, __init__.pyc and __init__.pyo files. Under C:\cygwin\bin I've got a gcovr file and also python.exe. I ran python.exe from the command prompt and it says Python 2.7.13 (default, Mar 14 2017, 23:27:55) [GCC 5.4.0] on cygwin (is this what I have to use for gcovr?). I tried >>> python2.7 gcovr with the full gcovr path mentioned above and I get SyntaxError: invalid syntax. I tried >>> gcovr and I get Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'gcovr' is not defined. I went through the help utility and looked at all the modules; gcovr was one of them. I've seen usage like ../../../scripts/gcovr -r but I don't have a scripts folder. What am I missing? What steps should I follow?
gcovr: How to use Cygwin gcovr package?
0
0
0
871
46,553,070
2017-10-03T20:41:00.000
1
0
0
0
python,sql,django,mongodb,postgresql
46,553,762
1
false
1
0
[GENERAL ADVICE]: I always use Postgres or MySQL as the Django ORM connection and then Mongo or DynamoDB for analytics. You can say that it creates unnecessary complexity, and that is true, but for us that abstraction also makes it easier to separate out teams. You have your front-end devs, backend/full stacks, and true backend devs; not all of them need to be Django experts. [SPECIFIC ADVICE]: This sounds to me like you should just get started with Mongo. Unless you are a B2B SaaS app selling to enterprise companies who won't like a multi-tenant data model, it shouldn't be tough to map this out in Mongo. The main reason I say Mongo is nice is that it sounds like you don't fully know the schema of what you'll collect ahead of time. Later you can refactor once you get a better handle on what data you collect. Expect to refactor, and just get the thing working.
1
1
0
I’m building a web app (python/Django) where customers create an account, each customer creates/adds as many locations as they want and a separate server generates large amounts of data for each location several times a day. For example: User A -> [locationA, locationB] User B -> [locationC, locationD, locationE] Where each location is an object that includes name, address, etc. Every 3 hours a separate server gathers data from various sources like weather, check-ins etc for each location and I need to store each item from each iteration so I can then perform per-user-per-location queries. E.g. “all the checkins in the last week group by location for User A” Right now I am using MongoDB and storing a collection of venues with a field of ownerId which is the ObjectID of the owning user. What is the best strategy to store the records of data? The naïve approach seems to be a collection for checkins, a collection for weather records etc and each document would have a “location” field. But this seems to have both performance and security problems (all the access logic would be in web app code). Would it be better to have a completely separate DB for each user? Are there better ways? Is a different strategy better if we switch to Postgres/SQL database?
Right strategy for segmenting Mongo/Postgres database by customer?
0.197375
1
0
81
46,557,405
2017-10-04T05:07:00.000
0
0
1
0
python,string,python-3.x,exponentiation
46,557,897
1
false
0
0
As these are binary infix operators, you could split the string into symbols (numbers, operators), assign a priority to the operators and then rearrange them into a (prefix-notation-like) stack. Or split the input string into the parts separated by the exponent mark; the number at the end/beginning of each pair of neighboring substrings can then be cut out, evaluated and replaced: "6 * 4^3 +2" -> ["6 * 4", "3 + 2"] -> "6 *" + x + "+ 2"
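A sketch of that second idea as a pre-substitution pass: every a^b is evaluated and replaced so the existing calculator only ever sees + - * /. It handles plain numbers left to right and ignores parentheses and right-associativity, so it is only a starting point.

```python
import re

def expand_exponents(expr):
    pattern = r'(\d+(?:\.\d+)?)\s*\^\s*(\d+(?:\.\d+)?)'
    while re.search(pattern, expr):
        expr = re.sub(pattern,
                      lambda m: str(float(m.group(1)) ** float(m.group(2))),
                      expr, count=1)
    return expr

print(expand_exponents("5.23 * 2^4/3^2 -1.0"))   # -> "5.23 * 16.0/9.0 -1.0"
```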
1
1
0
While working within python 3, I have created a calculator that accepts string inputs such as "-28 + 4.0/3 * 5" and other similar mathematical equations. As an exercise I had wanted to support exponentiation through use of the '^' key such that inputs like "5.23 * 2^4/3^2 -1.0" or other equations that contain values to a certain power would be functional. However, implementation with my current code has proven difficult. Not wanting to scrap my work, I realized that I could implement this if I could find a way to take the original string and selectively solve for the '^' operations such that inputs like the aforementioned "5.23 * 2^4/3^2 -1.0" would become "5.23 * 16/9 -1.0" which I could then feed into the code written prior. Only problem is, I am having some trouble isolating these pieces of the equations and was hoping someone might be able to lend a hand.
Python, exclusive exponentiation from a string input
0
0
0
77
46,558,735
2017-10-04T06:55:00.000
0
0
0
0
python,numpy
46,559,545
2
false
0
0
For that purpose you will need to implement the multiplication of polynomials; that is, you need to make sure your product is able to generate the product of (am * x^m + ... + a0) * (bn * x^n + ... + b0). If your product is able to do this, then knowing the roots r1, ..., rk you can write the polynomial as (x - r1) * ... * (x - rk), and you just need to repeatedly calculate that product.
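A small sketch of that repeated product, using np.convolve to multiply two coefficient lists; with roots 1 and 2 it reproduces the coefficients [1, -3, 2] of x^2 - 3x + 2 from the question:

```python
import numpy as np

def poly_from_roots(roots):
    coeffs = np.array([1.0])
    for r in roots:
        # multiplying coefficient lists is a convolution: coeffs * (x - r)
        coeffs = np.convolve(coeffs, [1.0, -r])
    return coeffs

print(poly_from_roots([1, 2]))             # [ 1. -3.  2.]
print(np.poly1d(poly_from_roots([1, 2])))  # pretty-printed polynomial
```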
1
1
1
I'm trying to create a numpy polynomial from the roots of the polynomial. I could only find a way to do that from the polynomial's coefficients. The way it works now, for the polynomial x^2 - 3x + 2 I can create it like this: poly1d([1, -3, 2]). I want to create it from its roots, which are 1 and 2.
numpy - create polynomial by its roots
0
0
0
852
46,559,124
2017-10-04T07:18:00.000
1
0
0
1
php,python-3.x,hadoop,hdfs,bigdata
46,566,798
1
true
1
0
If your application is written in Java, this is easily possible using the DFS client libraries, which can read and write files in HDFS in a very similar way to a standard filesystem. Basically you can open an input or output stream and read or write whatever data you want. If you are planning to use Python to build the web application, then you could look at WebHDFS, which provides an HTTP-based API to put and get files from HDFS.
1
2
0
I have a requirement where I need to set up Hadoop to save files, not just text files: they can be images, videos, or PDFs. And there will be a web application from which users can add files and access them whenever needed. Is it possible to implement? The web application will also need to be developed by me. Thank you.
Can I access HDFS files from Custom web application
1.2
0
0
139
46,560,328
2017-10-04T08:28:00.000
0
0
1
0
python,python-3.x
56,409,117
4
false
0
0
I had faced a similar problem. For some reason I wanted to change the PC admin, but my Python was installed in the old user's directory, and all updates and repairs had to be done in that same directory. So I deleted the Python path from the registry (since I wanted to do a fresh install later): Computer\HKEY_CURRENT_USER\SOFTWARE\Python and then reinstalled Python. PS: while installing on your home PC, it's better to install across users. My Python is installed at the location below: C:\Program Files\Python37
2
9
0
I have installed Python 3.6.2 for Windows in c:\users\username\AppData\Local\programs\Python\Python36 (because that is the (totally stupid) default). I have manually moved that into c:\ but the update to Python 3.6.3 still installs to the original target. How do I change this (without uninstalling, which would also uninstall all packages)?
How do I move a Python 3.6 installation a different directory?
0
0
0
20,300
46,560,328
2017-10-04T08:28:00.000
2
0
1
0
python,python-3.x
65,544,388
4
false
0
0
If you installed Python for all users, the registry path (64bit Python on 64bit OS) would be: HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.8\Idle HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.8\InstallPath HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.8\PythonPath HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.8\Help\Main Python Documentation HKEY_CLASSES_ROOT\Python.File\Shell\editwithidle\shell\edit38\command HKEY_CLASSES_ROOT\Python.NoConFile\Shell\editwithidle\shell\edit38\command
2
9
0
I have installed Python 3.6.2 for Windows in c:\users\username\AppData\Local\programs\Python\Python36 (because that is the (totally stupid) default). I have manually moved that into c:\ but the update to Python 3.6.3 still installs to the original target. How do I change this (without uninstalling, which would also uninstall all packages)?
How do I move a Python 3.6 installation a different directory?
0.099668
0
0
20,300
46,560,385
2017-10-04T08:32:00.000
0
0
1
0
python,list,python-2.7,mutable
46,560,587
4
false
0
0
You could use list multiplication. As for the complexity, it is very fast: s = ['a', 'b', 'c'] * i, and then you can modify elements by index. Pretty neat, don't you think?
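A small sketch of list multiplication; the second line goes a step beyond the answer above to address the question's independent-copies constraint for mutable elements (the sample data and count are made up):

```python
import copy

base = [{'hp': 100}, {'hp': 80}, {'hp': 60}]   # mutable elements
n = 3

# list multiplication repeats references, which is fine if you only reassign by index
repeated = base * n

# if each element must be an independent copy, copy explicitly
independent = [copy.deepcopy(item) for _ in range(n) for item in base]
```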
1
4
0
So I have a list [a,b,c] and I want to obtain [a,b,c,a,b,c,...a,b,c]. I can of course do this with two nested loops, but there must be a better way? itertools.cycle() would have been a solution if I could provide a count. Two constraints: it should work in 2.7 (but for the sake of curiosity I'm interested in a 3.x solution), and the list elements should be independent copies (they are mutable types).
Best way to extend a list with itself N times
0
0
0
2,017
46,562,267
2017-10-04T10:10:00.000
0
0
0
0
python-3.x,rest,api,security,authentication
46,562,640
1
false
1
0
Put an API gateway in front of your API; the API gateway is publicly exposed (i.e. in the DMZ) while the actual APIs are internal. You can look into Kong.
1
0
0
For the last few months I've been working on a REST API for a web app for the company I work for. The endpoints supply data such as transaction history, user data, and data for support tickets. However, I keep running into one issue that always seems to set me back to some extent. The issue I seem to keep running into is: how do I handle user authentication for the REST API securely? All data is going to be sent over an SSL connection, but there's a part of me that's paranoid about potential security problems that could arise. As it currently stands, when a client attempts to log in, the client must provide a username or email address, and a password, to a login endpoint (e.g. "/api/login"). Along with this information, a browser fingerprint must be supplied through the header of the request that's sending the login credentials. The API then validates whether or not the specified user exists, checks whether or not the password supplied is correct, and stores the fingerprint in a database model. To access any other endpoints in the API, a valid token from logging in and a valid browser fingerprint are required. I've been using browser fingerprints as a means to prevent token hijacking, and as a way to make sure that the same device used to log in is being used to make the requests. However, I have noticed a scenario where this practice backfires on me. The client-side library I'm using to generate browser fingerprints isn't always accurate; sometimes the library spits out a different fingerprint entirely, which causes some client requests to fail as the different fingerprint isn't recognized by the API as being valid. I would like to keep track of which devices are used to make requests to the API. Is there a more consistent way of doing so, while still protecting tokens from being hijacked? When thinking of the previous question, another one also comes to mind: how do I store auth tokens on the client side securely, or in a way that makes it difficult for someone to obtain the tokens through malicious means such as an XSS attack? I understand that setting a strict Content Security Policy on browser-based clients can be effective in defending against XSS attacks. However, I still get paranoid about storing tokens as cookies or in local storage. I understand OAuth2 is usually a good solution to user authentication, and I have considered using it before to deal with this problem. However, I'm writing the API using Flask, and I'm also using JSON Web Tokens. As it currently stands, Flask's implementation of OAuth2 has no way to use JWTs as access tokens when using OAuth for authentication. This is my first large-scale project where I have had to deal with this issue and I am not sure what to do. Any help, advice, or critiques are appreciated. I'm in need of the help right now.
How to handle Rest API user authentication securely?
0
0
1
260
46,565,209
2017-10-04T12:44:00.000
1
0
1
0
python,tkinter,label,character
46,565,877
1
true
0
1
No, it is not possible with either the Label or Text widget. You can do it with the Canvas widget, however.
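A tiny sketch of the Canvas approach: drawing two characters at the same coordinates overlaps them (characters, font, and coordinates are arbitrary):

```python
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=100, height=60)
canvas.pack()
canvas.create_text(50, 30, text="O", font=("Courier", 32))
canvas.create_text(50, 30, text="/", font=("Courier", 32))
root.mainloop()
```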
1
0
0
Is there a way of placing a character on top of another character in a Python tkinter Label? If not, is it possible with the Text widget?
Place char on another char in tkinter Label widget
1.2
0
0
57
46,568,074
2017-10-04T14:59:00.000
0
1
0
1
python,amazon-web-services,lambda,glibc
49,010,419
2
false
0
0
It was related to a Lambda server lib problem. Solved by making the zip on an AWS EC2 server, by installing all the Python libraries there in EC2.
2
0
0
/lib64/libc.so.6: version `GLIBC_2.22' not found (required by /var/task/pyhull/_pyhull.so). I am not able to fix this error on AWS Lambda; any help please?
/lib64/libc.so.6: version `GLIBC_2.22' not found in Aws Lambda Server
0
0
0
3,248
46,568,074
2017-10-04T14:59:00.000
0
1
0
1
python,amazon-web-services,lambda,glibc
46,575,657
2
false
0
0
The /var/task/pyhull/_pyhull.so was linked against GLIBC-2.22 or later. You are running on a system with GLIBC-2.21 or earlier. You must either upgrade your AWS system, or get a different _pyhull.so build.
2
0
0
/lib64/libc.so.6: version `GLIBC_2.22' not found (required by /var/task/pyhull/_pyhull.so). I am not able to fix this error on AWS Lambda; any help please?
/lib64/libc.so.6: version `GLIBC_2.22' not found in Aws Lambda Server
0
0
0
3,248
46,568,959
2017-10-04T15:42:00.000
2
0
1
0
python,spyder,pyflakes
46,574,680
4
false
0
0
You need to go to Tools > Preferences > Editor > Code Introspection/Analysis and deactivate the option called Real-time code analysis
2
3
0
The editor in Spyder always gives me warnings for unused imports/variables immediately after I type the line. I want to suppress such warnings. How do I do that? And I want this to happen for every file I open in the Spyder editor, wouldn't prefer local fixes. I tried adding 'disable=' in ~/.pylintrc and it didn't work. Moreover, the Spyder editor uses pyflakes anyway.
How to suppress a certain warning in Spyder editor?
0.099668
0
0
8,765
46,568,959
2017-10-04T15:42:00.000
-1
0
1
0
python,spyder,pyflakes
55,449,435
4
false
0
0
Locate Preferences -> Editor -> Code Introspection/Analysis -> deactivate/uncheck Real-time code analysis. Warning: it will also stop showing errors in the editor.
2
3
0
The editor in Spyder always gives me warnings for unused imports/variables immediately after I type the line. I want to suppress such warnings. How do I do that? And I want this to happen for every file I open in the Spyder editor, wouldn't prefer local fixes. I tried adding 'disable=' in ~/.pylintrc and it didn't work. Moreover, the Spyder editor uses pyflakes anyway.
How to suppress a certain warning in Spyder editor?
-0.049958
0
0
8,765
46,569,943
2017-10-04T16:37:00.000
3
0
0
0
python,random,seed
46,573,367
1
true
0
0
Let's say I would like to generate n > 10 ^ 20 numbers Let's say not. If you could generate a billion values per second, that would require 1E20 values / 1E9 values per second / 3600 seconds per hour / 24 hours per day / 365.25 days per year, which is more than 3000 years. Even if you have hardware and energy sources that reliable, you won't be there to see the outcome. using random.seed(SEED) and n subsequent calls to random.random() The results would be statistically indistinguishable from uniform because the underlying algorithm, Mersenne Twister, is designed to produce that behavior.
1
0
1
Let's say I would like to generate n > 10 ^ 20 numbers using random.seed(SEED) and n subsequent calls to random.random(). Is the generated sequence guaranteed to be uniformly distributed regardless of the chosen value of SEED?
Does every chosen seed for random.seed() guarantee that random will generate a uniformly distributed sequence?
1.2
0
0
124
46,570,466
2017-10-04T17:08:00.000
4
0
1
0
python,qt,qmake
46,570,647
2
true
0
1
qmake is the name of Qt's build tool, which generates Makefiles or other files needed to compile code on any platform that Qt supports. To fix this, you need to find out where the qmake executable lives on your system. If you know where the executable is installed, just add that directory to your path. If you don't know, you'll have to find it somehow, by searching your computer.
1
5
0
I am attempting to get PyQt4 running on a Windows 10 device with Python 3.6.3. I have SIP installed and built already in my Python directory. However, when I run the configure.py/configure-ng.py file in the PyQt4 folder I get the following error: Error: Make sure you have a working Qt qmake on your PATH. I'm not sure how to fix this problem or what qmake is. I appreciate any answers on how to fix this!
PyQt4 Error: Make sure you have a working Qt qmake on your PATH
1.2
0
0
13,360
46,571,076
2017-10-04T17:46:00.000
2
0
0
0
python,node.js,websocket,ipc
46,571,515
2
false
0
0
The question is purely opinion-based, but I will give it a shot anyway: WebSocket is overkill, imo. First of all, in order to make WebSockets work you have to implement HTTP (or at least some basic form of it) to do the handshake. If you do that, then it is better to stick to "normal" HTTP unless there's a reason for full-duplex communication. There are lots of tools everywhere to handle HTTP over (unix domain) sockets. But this might be overkill as well. If you have workers, then I suppose performance matters. The easiest and (probably) most efficient solution is the following protocol: each message starts with 1-8 (pick a number) bytes which determine the size of the following content. The content is anything you want, e.g. a protobuf message. For example, if you want to send foo, you send 0x03 0x00 0x66 0x6f 0x6f. The first two bytes correspond to the size of the content (being 3) and the next 3 bytes correspond to foo.
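A minimal sketch of that length-prefixed framing in Python (a real reader would loop until recv has returned the full header and payload):

```python
import struct

def pack_message(payload: bytes) -> bytes:
    # 2-byte little-endian length followed by the payload
    return struct.pack('<H', len(payload)) + payload

def read_message(sock) -> bytes:
    header = sock.recv(2)                 # real code should loop until 2 bytes arrive
    (size,) = struct.unpack('<H', header)
    return sock.recv(size)

print(pack_message(b'foo'))               # b'\x03\x00foo' -- matches the example above
```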
2
3
0
I have a "main" process and a few "worker" processes between which I want to pass some messages. The messages could be binary blobs but has a fixed size for each. I want an abstraction which will neatly buffer and separate out each message for me. I don't want to invent my own protocol on top of TCP, and I can't find any simple+lightweight solution that is portable across languages. (As of now, the "main" process is a Node.js server, and the "worker" processes are planned to be in Python.)
Is using websockets a good idea for IPC?
0.197375
0
1
3,857
46,571,076
2017-10-04T17:46:00.000
0
0
0
0
python,node.js,websocket,ipc
46,571,589
2
false
0
0
It sounds like you need a message broker of some sort. Your requirement for "buffering" would exclude, for example, ZeroMQ (which is ideal for inter-process communication, but has no built-in message persistence). This leaves you with options such as RabbitMQ, or SQS if you happen to be on AWS. You might also look at Kafka (Kinesis on AWS). These services all offer "buffering" of messages, with RabbitMQ offering the greatest range of configurations, but probably the greatest implementation hurdle. Another option would be to use Redis as a simple messaging service. There are a number of options, all of which suit different use-cases and environments. I should add, though, that "simple and lightweight" doesn't really fit with any solution other than - perhaps - ZeroMQ.
2
3
0
I have a "main" process and a few "worker" processes between which I want to pass some messages. The messages could be binary blobs but has a fixed size for each. I want an abstraction which will neatly buffer and separate out each message for me. I don't want to invent my own protocol on top of TCP, and I can't find any simple+lightweight solution that is portable across languages. (As of now, the "main" process is a Node.js server, and the "worker" processes are planned to be in Python.)
Is using websockets a good idea for IPC?
0
0
1
3,857
46,571,485
2017-10-04T18:11:00.000
1
1
0
0
python,django,unit-testing,python-unittest,factory-boy
46,572,485
3
false
1
0
If you create a services.py file in the same folder as models.py, you can put the cleanup code in there and then call it from the management command and from the test tearDown while keeping it DRY.
1
2
0
I'm using the Django test runner to run my unit tests. Some of these tests use factories which create TONS of files on my local system. They all have a detectable name, and can be removed reasonably easily. I'm trying to avoid having to either Keep a file-deleting cron job running Change my custom image model's code to delete the file if it detects that we're testing. Instead, I'd like to have a command run once (and only once) at the end of the test run to clean up all files that the tests have generated. I wrote a small management command that deletes the files that match the intended convention. Is there a way to have the test runner run call_command on that upon completion of the entire test suite (and not just in the tearDown or tearDownClass methods of a particular test)?
Django/unittest run command at the end of the test runner
0.066568
0
0
1,122
46,572,148
2017-10-04T18:51:00.000
0
0
0
0
python,python-2.7,anaconda,spyder
46,593,691
2
false
0
0
You should try conda install "requests>=2" (quoted so the shell doesn't treat > as a redirect); conda will take care of all the dependencies and try to install a version of requests at or above 2.0.0.
1
0
0
I am receiving the following error message in Spyder: Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later. I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
Error message in spyder, upgrade requests
0
0
1
192
46,572,569
2017-10-04T19:18:00.000
2
0
1
0
python,python-3.x
46,572,670
1
true
0
0
Local variables and function attributes are completely separate. Each call of a function creates a fresh scope with new, independent local variables, but a function only has one attribute namespace. Locals can be named the same thing as function attributes, but they'll still be distinct and separate. A module's global variable namespace is its __dict__. All module-globals are attributes of the module object and vice versa, unless the attribute is handled by a descriptor; for example, modules have a __class__ attribute that isn't a global, because __class__ is handled by a descriptor.
1
0
0
In Python3, for a function Can a variable defined in the local scope of a function not be an attribute of the function's function object? Conversely, can an attribute of a function's function object not be a variable in the local scope of the function Similarly, for a module: Must a variable defined in the global scope of a module be an attribute of the module's module object? Conversely, can an attribute of a module's module object be a variable in the global scope of the module? Thanks.
Variables in the scopes of a function/module and attributes of the function/module's objects
1.2
0
0
46
46,574,039
2017-10-04T20:56:00.000
3
0
1
0
python,ipython,spyder
46,574,716
3
false
0
0
(Spyder developer here) There's no option to do this, sorry.
2
4
0
How do I configure the IPython console in Spyder to auto-close brackets/braces/parens and quotes?
Autoclosing of brackets and quotes in Spyder IPython console?
0.197375
0
0
3,249
46,574,039
2017-10-04T20:56:00.000
0
0
1
0
python,ipython,spyder
54,300,627
3
false
0
0
Please select the check boxes at Tools > Preferences > Editor > Advanced Settings
2
4
0
How do I configure the IPython console in Spyder to auto-close brackets/braces/parens and quotes?
Autoclosing of brackets and quotes in Spyder IPython console?
0
0
0
3,249
46,574,611
2017-10-04T21:37:00.000
1
0
1
0
python,math,binary,integer,twos-complement
46,574,991
1
false
0
0
Python does not have a limit on the size of an integer. In Python 2, integers would automatically be converted to longs when they went past the limit for an int. In Python 3, integers have arbitrarily high precision; there is no defined limit in the language.
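A quick Python 2 check of that promotion (Python 3 has no sys.maxint, since there is only the single arbitrary-precision int type):

import sys

print(type(sys.maxint))       # <type 'int'>
print(type(sys.maxint + 1))   # <type 'long'> - silently promoted, arbitrary precision
print(-4294967296 == -2**32)  # True: the printed value is simply -2**32, no 32-bit wrap-around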
1
0
0
I'm trying to understand how Python 2.7.6 can output -4294967296 from a 32 bit integer. My assumptions on Python integers are: Python does not have unsigned integers Python uses 2's complement If assumption 2 is true then in a 32 bit integer the max negative number should be -2147483648 (-2^31) since the MSB is reserved for the sign of the integer. Does Python stack on another 32 bits (32 bit + 32 bit) to make a 64 bit integer?
How does python output -4294967296 in 32 bits integer
0.197375
0
0
396
46,574,694
2017-10-04T21:45:00.000
0
0
0
0
python,mysql,bigdata,mysql-python
46,607,645
2
false
0
0
That depends on what you have. You can use Apache Spark and its SQL module: Spark SQL gives you the possibility to write SQL queries over your dataset, but for best performance you need a distributed mode (you can use it on a local machine, but the results are limited) and high machine performance. You can use Python, Scala or Java to write your code.
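A minimal PySpark sketch of that idea; the file path and column names are made up:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-analysis").getOrCreate()

# Spark reads the 50 GB CSV lazily and in partitions, so it never has to fit in RAM at once
df = spark.read.csv("data/big_file.csv", header=True, inferSchema=True)
df.createOrReplaceTempView("events")

# Plain SQL over the dataset; the result is another (distributed) DataFrame
result = spark.sql("SELECT user_id, COUNT(*) AS n FROM events GROUP BY user_id")
result.show(10)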
1
0
0
I have a large amount of data, around 50GB worth in a CSV, which I want to analyse for purposes of ML. It is however way too large to fit in Python. I ideally want to use MySQL because querying is easier. Can anyone offer a host of tips for me to look into. This can be anything from: How to store it in the first place; I realise I probably can't load it in all at once, would I do it iteratively? If so, what things can I look into for this? In addition I've heard about indexing, would that really speed up queries on such a massive data set? Are there better technologies out there to handle this amount of data and still be able to query and do feature engineering quickly? What I eventually feed into my algorithm should be able to be done in Python, but I need to query and do some feature engineering before I get my data set that is ready to be analysed. I'd really appreciate any advice; this all needs to be done on a personal computer! Thanks!!
Storing and querying a large amount of data
0
1
0
589
46,575,847
2017-10-04T23:52:00.000
0
0
0
0
python,ruby,excel,xlsxwriter,axlsx
46,669,389
1
false
0
0
If you already have VBA that works for your project, then translating it to Ruby + WIN32OLE is probably your quickest path to working code. Anything you can do in VBA is doable in Ruby (if you find something you can't do, post here to ask for help). I prefer working with Excel via OLE since I know the file produced by Excel will work anywhere I open it. I haven't used axlsx but I'm sure it's a fine project; I just wouldn't trust that it would produce working Excel files every time.
1
0
1
I want to create a program, which automates excel reporting including various graphs in colours. The program needs to be able to read an excel dataset. Based on this dataset, the program then has to create report pages and graphs and then export to an excel file as well as pdf file. I have done some research and it seems this is possible using python with pandas - xlsxWriter or xlswings as well as Ruby gems - axlsx or win32ole. Which is the user-friendlier and easy to learn alternative? What are the advantages and disadvantages? Are there other options I should consider (I would like to avoid VBA - as this is how the reports are currently produced)? Any responses and comments are appreciated. Thank you!
Automating excel reporting and graphs - Python xlsxWriter/xlswings or Ruby axlsx/win32ole
0
1
0
388
46,579,249
2017-10-05T06:34:00.000
5
0
1
0
python-3.x,tensorflow
48,451,660
1
true
0
0
You need a 64 bit version of Python.
1
3
1
I am using Python 3.6 on my PC (Windows 10). I wanted to install the TensorFlow package (using pip), so I opened the cmd and typed the following as specified on the TensorFlow website; I want to install the CPU package, not the GPU package: C:\Users\rahul>C:\Windows.old\Users\rahul\AppData\Local\Programs\Python\Python36-32\Scripts\ pip3.exe install --upgrade tensorflow but I get this error: Collecting tensorflow Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow. How do I overcome this?
Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
1.2
0
0
7,333
46,581,018
2017-10-05T08:24:00.000
0
0
1
0
python,pandas
46,582,120
1
false
0
0
I applied the command below and it works: df['kategorie']=action['kategorie'].astype('category')
1
0
1
I have a data frame with one column full of string values. They need to be converted into categories. Due to the huge amount, it would be inconvenient to define the categories in a dictionary. Is there any other way in pandas to do that?
Converting many string values to categories
0
0
0
34
46,582,872
2017-10-05T10:00:00.000
0
0
0
0
python,hardware,pyserial
46,583,150
1
true
1
0
You are missing the communication protocol, i.e. what command you should send to get a proper response. So, dig through the data sheets, or you will have to reverse-engineer the software you got with the device - which perhaps won't be legal if the licence doesn't allow you to use the device with any program except the one you received, etc. If you cannot find the protocol specs in the data sheets or on the internet, then install an RS232 virtual card and make a loopback device, so that you connect to one virtual port while the real port is connected to another; you can then be a 'man in the middle' and see what data passes through when the software you got communicates with the device. Enjoy!
1
0
0
Currently I'm working with the DMM DNY2; the hardware comes together with software. The software can read the available ports, assign a port for the servo and read the stored parameters in the servo driver. Now, I'm trying to create a Python script to do the same as the software. I can get and assign the port, but I can't get the stored parameters in the servo driver: each time I do a read it returns b''. Can someone help me, give me pointers on what I should do or what I'm missing.
Read DMM DNY2 servo driver stored parameters using python
1.2
0
0
37
46,583,487
2017-10-05T10:31:00.000
0
0
1
0
python,data-structures,microcontroller,micropython
47,052,669
1
false
0
0
Sorry, but your question contains the answer - if you need to work with 32x32 tiles, the best format is one which represents your big image as a sequence of tiles (and e.g. not as one big 256x256 image, though reading tiles out of it is also not rocket science and should be fairly trivial to code in MicroPython, though 32x32 tiles would be more efficient of course). You don't describe the exact format of your images, but I wouldn't use the pickle module for it; instead store the images as raw bytes and load them into array.array() objects (using the in-place .readinto() operation).
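A sketch of that tile-by-tile loading, assuming 1 byte per pixel and a raw file that stores the 32x32 tiles consecutively (the path and layout are assumptions):

import array

TILE_BYTES = 32 * 32  # 1 byte per pixel

def load_tile(path, tile_index, buf=None):
    # Reuse one preallocated buffer to avoid fragmenting the small heap
    if buf is None:
        buf = array.array('B', bytes(TILE_BYTES))
    with open(path, 'rb') as f:
        f.seek(tile_index * TILE_BYTES)
        f.readinto(buf)   # in-place read, no extra copies
    return buf

tile = load_tile('/sd/image.raw', 0)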
1
0
1
I'm writing some image processing routines for a micro-controller that supports MicroPython. The bad news is that it only has 0.5 MB of RAM. This means that if I want to work with relatively big images/matrices like 256x256, I need to treat it as a collection of smaller matrices (e.g. 32x32) and perform the operation on them. Leaving at aside the fact of reconstructing the final output of the orignal (256x256) matrix from its (32x32) submatrices, I'd like to focus on how to do the loading/saving from/to disk (an SD card in this case) of this smaller matrices from a big image. Given that intro, here is my question: Assuming I have a 256x256 on disk that I'd like to apply some operation onto (e.g. convolution), what's the most convenient way of storing that image so it's easy to load it into 32x32 image patches? I've seen there is a MicroPython implementation of the pickle module, is this a good idea for my problem?
Load portions of matrix into RAM
0
0
0
52
46,584,556
2017-10-05T11:31:00.000
1
0
1
0
python,tensorflow
46,584,984
1
true
0
0
Yes, you can. The easy way is to install the Anaconda Python distribution, then create one environment with Python 2.7 and one with Python 3, and install TensorFlow in both environments.
1
0
1
I've installed TensorFlow using Python 3 (pip3 install). Now, since Jupyter Notebook is using Python 2, and thus the python command is linked to python2.7, all the code in Jupyter Notebook gets an error (ImportError: No module named tensorflow). Question: Can I install TensorFlow running side by side for both Python 2 and 3?
Can I install Tensorflow on both Python 2 and 3?
1.2
0
0
948
46,585,670
2017-10-05T12:29:00.000
2
0
0
1
google-app-engine,firebase,google-app-engine-python,google-cloud-python,google-cloud-firestore
46,625,646
1
true
1
0
The Cloud Firestore server-side client libraries are not optimized for App Engine Standard. They don't integrate with a caching solution like GAE's memcache; you'd have to write that layer yourself.
1
1
0
Setup: Google App Engine application on Python standard environment. Currently, the app uses the NDB library to read/write from its Datastore. It uses async tasklets for parallel, asynchronous reads from Datastore, and memcache. If I would like to use Firestore as a replacement for Datastore, it seems that I would have to use the Google Cloud Client Library for Python. I believe that the google-cloud lib doesn't support a mechanism like tasklets. But I wonder: Does the lib use a thread-safe cache-mechanism for its requests to the Firestore API, and maybe even GAE's memcache?
Does Cloud Python lib in GAE use caching or memcache for access to Cloud Firestore data?
1.2
0
0
217
46,586,465
2017-10-05T13:09:00.000
1
1
1
0
python,python-3.x,python-import,relative-path
46,771,938
1
true
0
0
According to the documentation, I need to add the package name in front of the (.). So an (import .module) should be (import filename.module). Statements like (from . import something) can change to (import filename.module.something as something)
1
0
0
I don't understand the following from pep-0404: In Python 3, implicit relative imports within packages are no longer available - only absolute imports and explicit relative imports are supported. In addition, star imports (e.g. from x import *) are only permitted in module level code. What is a relative import? I have lines that import like this: from . import "something" Why is it just a dot?
Relative vs explicit import upgrading from python 2
1.2
0
0
65
46,587,226
2017-10-05T13:44:00.000
2
0
1
0
python,spyder
56,422,267
6
false
0
0
To install Spyder and its other dependencies, run pip install spyder. You may need to install a Qt binding (PyQt5) separately with pip if running under Python 2. To launch Spyder, go to your Python installation directory, in my case C:\Program Files (x86)\Python\Scripts, and launch spyder3.exe.
2
27
0
I already have Python 3.6 (32-bit) on Windows 7. Is there a way to install Spyder without downloading Anaconda, WinPython, etc. ?
Spyder installation without Anaconda
0.066568
0
0
50,264
46,587,226
2017-10-05T13:44:00.000
2
0
1
0
python,spyder
69,653,669
6
false
0
0
They’ve recently provided stand-alone installers for Mac & Windows, including a “lite” version.
2
27
0
I already have Python 3.6 (32-bit) on Windows 7. Is there a way to install Spyder without downloading Anaconda, WinPython, etc. ?
Spyder installation without Anaconda
0.066568
0
0
50,264
46,592,760
2017-10-05T18:46:00.000
0
0
0
0
python,csv,slack,slack-api
46,597,662
2
false
0
0
You can solve this using pandas from Python. pandas is a data-processing framework, and it can process Excel and TXT as well as CSV files. See the pandas documentation.
2
0
1
I have recently been working on a slackbot and I have the basic functionality down; I am able to take simple commands and have the bot answer. But I want to know if there is any way to have the bot store some data given by a user, such as "@slackbot 5,4,3,2,1", and then have the bot sort it and return it like "1,2,3,4,5". Also, is there any way to have the bot read an external .csv file and have it return some type of information? For example, I want the bot to tell me what the first row of a .csv file says. Thank you! Any help would be appreciated
Programming an interactive slackbot - python
0
0
0
158
46,592,760
2017-10-05T18:46:00.000
0
0
0
0
python,csv,slack,slack-api
56,807,453
2
false
0
0
Whatever you have mentioned in your question can easily be done with a slackbot. You can develop the slackbot as a Django server. If you want the bot to store data, you can connect your Django server to any database or to any cache (e.g. Redis, Memcached). You can write the sorting logic in Python and send the sorted list back to Slack using the slackclient library. And based on your input to the slackbot, you can perform an action in Python and send the response back to Slack. Hope this answers!
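For the sorting and CSV parts, a small sketch of the logic that would sit behind the bot (the message format "5,4,3,2,1" is taken from the question; everything else is illustrative):

import csv

def sort_numbers(message_text):
    # "5,4,3,2,1" -> "1,2,3,4,5"
    numbers = [int(part) for part in message_text.split(',')]
    return ','.join(str(n) for n in sorted(numbers))

def first_csv_row(path):
    # Return the first row of a CSV file as a list of strings
    with open(path, newline='') as f:
        return next(csv.reader(f))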
2
0
1
I have recently been working on a slackbot and I have the basic functionality down; I am able to take simple commands and have the bot answer. But I want to know if there is any way to have the bot store some data given by a user, such as "@slackbot 5,4,3,2,1", and then have the bot sort it and return it like "1,2,3,4,5". Also, is there any way to have the bot read an external .csv file and have it return some type of information? For example, I want the bot to tell me what the first row of a .csv file says. Thank you! Any help would be appreciated
Programming an interactive slackbot - python
0
0
0
158
46,594,866
2017-10-05T21:11:00.000
1
0
0
0
python,sql,database,orm,sqlalchemy
46,777,010
2
true
1
0
After asking around in #sqlalchemy IRC, it was pointed out that this could be done using ORM-level relationships in a before_flush event listener. It was explained that when you add a mapping through a relationship, the foreign key is automatically filled on flush, and the appropriate insert statement is generated by the ORM.
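A rough sketch of that pattern; Table1, Table2, the children relationship and compute_values are placeholders for the real models and logic:

from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, "before_flush")
def populate_table2(session, flush_context, instances):
    for obj in session.new:
        if isinstance(obj, Table1) and not obj.children:
            # Build the related rows through the relationship; the foreign key
            # is filled in automatically when the session flushes
            obj.children = [Table2(value=v) for v in compute_values(session, obj)]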
1
1
0
So I have two table in a one-to-many relationship. When I make a new row of Table1, I want to populate Table2 with the related rows. However, this population actually involves computing the Table2 rows, using data in other related tables. What's a good way to do that using the ORM layer? That is, assuming that that the Table1 mappings are created through the ORM, where/how should I call the code to populate Table2? I thought about using the after_insert hook, but i want to have a session to pass to the population method. Thanks.
Populating related table in SqlAlchemy ORM
1.2
1
0
601
46,596,490
2017-10-06T00:09:00.000
1
0
1
0
python
46,596,562
1
true
0
0
There are occasional cases where you'd want to put imports elsewhere (such as when imports have side-effects that you need to invoke in a certain order), but in this case, all your imports should go at the top of the .py source file like a table of contents. If you feel like your file is getting too cluttered, break out each class and the relevant imports into new source files.
1
0
0
I see in PEP8 that the accepted best practice for importing is to put all imports at the top of the module. I am wondering if this is still the case if you want to have multiple subclasses with different import needs all inside the same module. Specifically, I am making a generic DataConnector class to read data from different sources and then put that data into a pandas dataframe. I will have subclasses that read the different sources of data. For instance, one subclass will be CsvConnector(DataConnector), another PGdatabaseConnector(DataConnector). The Csv subclass will need to import csv and the PGdatabase class will need to import psycopg2. Is the best practice still to keep all the imports at the top of the entire module? (Logically it seem that all the classes should be contained in one module, but I could also see putting them all in different modules and then I wouldn't have to worry about importing libraries that wouldn't be used.)
Best practice for multiple classes inside module that need different imports?
1.2
0
0
185
46,596,636
2017-10-06T00:31:00.000
4
0
0
0
python,tensorflow
46,597,146
7
true
0
0
Rounding is a fundamentally nondifferentiable function, so you're out of luck there. The normal procedure for this kind of situation is to find a way to either use the probabilities, say by using them to calculate an expected value, or by taking the maximum probability that is output and choosing that one as the network's prediction. If you aren't using the output for calculating your loss function though, you can go ahead and just apply it to the result and it doesn't matter if it's differentiable. Now, if you want an informative loss function for the purpose of training the network, maybe you should consider whether keeping the output in the format of probabilities might actually be to your advantage (it will likely make your training process smoother) - that way you can just convert the probabilities to actual estimates outside of the network, after training.
1
6
1
So the output of my network is a list of propabilities, which I then round using tf.round() to be either 0 or 1, this is crucial for this project. I then found out that tf.round isn't differentiable so I'm kinda lost there.. :/
Differentiable round function in Tensorflow?
1.2
0
0
6,120
46,598,682
2017-10-06T05:16:00.000
1
0
0
0
linux,windows,python-3.x,tkinter
46,601,511
1
true
0
1
You need to invoke Tk's mainloop. Add tk.mainloop() at the end of your code. Besides that, I suggest you use import tkinter as tk (in which case you would have to rename your variable to something else (suggestion: root is kind of idiomatic)) instead of from tkinter import *, since you don't know what names that imports. It can replace names you imported earlier, and it makes it very difficult to see where names in your program are supposed to come from.
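Putting those two suggestions together, a minimal version of the script would be:

import tkinter as tk

root = tk.Tk()
btn = tk.Button(root, text="ttk")
btn.pack()
root.mainloop()  # without this the window never gets a chance to appear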
1
0
0
I tried the following program in Win10 and it works (it displays a window with a button on my Win10), but when I run it in Linux Mint it displays nothing: from tkinter import * tk=Tk() btn= Button(tk,text="ttk") btn.pack() I want it to display a window with a button on my Linux Mint.
How can I use python 3 tkinter in Linux?
1.2
0
0
395
46,599,013
2017-10-06T05:44:00.000
0
0
1
0
python,list,dictionary,key-value
46,599,399
6
false
0
0
A class is used when there is both state and behaviour; a dict is used when there is only state. Since you have both, use a class.
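As a sketch of what state plus behaviour could look like for this exercise (names are only illustrative):

class Player:
    def __init__(self, name):
        self.name = name            # state
        self.winning_scores = []    # state: any number of scores per player

    def add_win(self, score):       # behaviour
        self.winning_scores.append(score)

    def win_count(self):            # behaviour: useful for sorting players by wins
        return len(self.winning_scores)

players = {'player1': Player('player1')}
players['player1'].add_win(42)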
2
1
0
I'm a beginner at programming (and also an older person), so I don't really know how to express what I want correctly. I've tried to describe my question as thoroughly as possible, so really appreciate your patience! I would like to store the winning scores associated with each user. Each user would have different number of winning scores. I do not need to seperate users first name and last name, they can all be one string. I do not need the scores to be ordered in any way, and I don't need to be able to change them. I only need to be able to add the scores whenever the user wins and sort users by amount of wins. I will extract all the winning scores for statistics, but the statistics will not be concerned with what score belongs to what user. Once the program stops, it can all be erased from memory. From what I've researched so far it seems my best options are to either create a user class, where I store a list and add to the list each time. Or to create a dictionary with a key for each user. But since each user may have a different amount of winning scores, I don't know if I can use dictionaries with that (unless each key is associated with a list maybe?). I don't think I need something like a numpy array, since I want to create very simple statistics without worrying about what score belongs to what user. I need to think about not using an unnecessary amount of memory etc., especially because there may be a hundred winning scores for each user. But I can't really find clear information on what the benefits of dictionaries vs classes are. The programming community is amazingly helpful and full of answers, but unfortunately I often don't understand the answers. Greatful for any help I can get! And don't be afraid to tell me my ideas are dumb, I want to learn how to think like a programmer.
Beginners python. Best way to store different amount of items for each key?
0
0
0
69
46,599,013
2017-10-06T05:44:00.000
1
0
1
0
python,list,dictionary,key-value
46,600,049
6
false
0
0
You can use a dictionary, as the values in dicts can be mutable, like a list where you can keep all the winning scores for each user: {'player1' : [22,33,44,55], 'player2' : [23,34,45], ..... } If this is not an exercise that you will repeat, dicts make sense, but if it is an exercise that might need to be done again in the future, classes are a better alternative, as explained in the other answers by Stuart and Hallsville3. Hope it helps!
2
1
0
I'm a beginner at programming (and also an older person), so I don't really know how to express what I want correctly. I've tried to describe my question as thoroughly as possible, so really appreciate your patience! I would like to store the winning scores associated with each user. Each user would have different number of winning scores. I do not need to seperate users first name and last name, they can all be one string. I do not need the scores to be ordered in any way, and I don't need to be able to change them. I only need to be able to add the scores whenever the user wins and sort users by amount of wins. I will extract all the winning scores for statistics, but the statistics will not be concerned with what score belongs to what user. Once the program stops, it can all be erased from memory. From what I've researched so far it seems my best options are to either create a user class, where I store a list and add to the list each time. Or to create a dictionary with a key for each user. But since each user may have a different amount of winning scores, I don't know if I can use dictionaries with that (unless each key is associated with a list maybe?). I don't think I need something like a numpy array, since I want to create very simple statistics without worrying about what score belongs to what user. I need to think about not using an unnecessary amount of memory etc., especially because there may be a hundred winning scores for each user. But I can't really find clear information on what the benefits of dictionaries vs classes are. The programming community is amazingly helpful and full of answers, but unfortunately I often don't understand the answers. Greatful for any help I can get! And don't be afraid to tell me my ideas are dumb, I want to learn how to think like a programmer.
Beginners python. Best way to store different amount of items for each key?
0.033321
0
0
69
46,600,280
2017-10-06T07:17:00.000
0
0
1
0
python,python-3.x,anaconda,spyder
46,607,726
1
false
0
0
The problem is you have two Python versions installed in your system: one in C:\ProgramData\Anaconda3\ and the other in C:\Users\Jaker\AppData\Roaming\Python\Python36. Please uninstall one of them and the problem will be solved.
1
0
0
Tried anaconda today , it seems fine but when I tried to launch Spyder each time I get this error: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\Scripts\spyder-script.py", line 6, in <module> from spyder.app.start import main File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\app\start.py", line 23, in <module> from spyder.utils.external import lockfile File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\external\lockfile.py", line 22, in <module> import psutil File "C:\Users\Jaker\AppData\Roaming\Python\Python36\site-packages\psutil\__init__.py", line 126, in <module> from . import _pswindows as _psplatform File "C:\Users\Jaker\AppData\Roaming\Python\Python36\site-packages\psutil\_pswindows.py", line 16, in <module> from . import _psutil_windows as cext ImportError: cannot import name '_psutil_windows' Any help regarding this ? Also how do I get python 3.6.3 in anaconda..?
Error installing Spyder in anaconda
0
0
0
927
46,600,652
2017-10-06T07:37:00.000
0
0
0
0
python,csv
46,600,856
2
false
0
0
You can do this by using multiple CSV files - one CSV file per sheet. A comma-separated values file is a plain text format. It is only going to be able to represent flat data, such as a table (or a "sheet"). When storing multiple sheets, you should use separate CSV files. You can write each one separately and import/parse them individually into their destination.
1
0
1
Is there a way to create sheet 2 in the same CSV file by using Python code?
How can I create sheet 2 in a CSV file by using Python code?
0
0
0
4,260
46,604,245
2017-10-06T11:00:00.000
0
0
0
0
webdriver,appium,python-appium
47,365,628
2
false
0
0
I'm interested in what function you added, because the Appium server does not support device power on/off out of the box; the only way you can do it is to use adb directly.
2
0
0
I am trying to Power on/off a device using appium. I modified Webdriver.py in Appium python client. I added a function and command for power off. Its not working. Can anyone help me with Appium commands for power on/off. PS - I can not use adb commands
Power on/off using Appium commands
0
0
1
877
46,604,245
2017-10-06T11:00:00.000
0
0
0
0
webdriver,appium,python-appium
46,726,929
2
false
0
0
adb shell reboot -p - power off device. adb reboot -p - restart the device.
2
0
0
I am trying to Power on/off a device using appium. I modified Webdriver.py in Appium python client. I added a function and command for power off. Its not working. Can anyone help me with Appium commands for power on/off. PS - I can not use adb commands
Power on/off using Appium commands
0
0
1
877
46,605,599
2017-10-06T12:19:00.000
1
0
0
0
python,python-telegram-bot
46,637,593
1
false
0
0
To be exact, you can find the message object (or/and the text) of a callback_query at update.callback_query.message.text. Or, for convenience, you can always use update.effective_chat, update.effective_message and update.effective_user to access the chat, message and from_user objects wherever they are (no matter whether it's a normal message, a callback_query, an inline_query etc.)
1
0
0
update.callback_query.from_user - inside the same function I used update.message.text, where I tried to get the message from the user. update.message.text is not working, giving me 'NoneType' object has no attribute 'text'. How can I use the two updates in the same function?
I used Update.callback_query after which i used Update.message in same function its not working
0.197375
0
0
1,504
46,608,223
2017-10-06T14:36:00.000
0
0
0
0
python,sorting,amazon-redshift,pandas-to-sql
46,610,485
1
true
0
0
While ingesting data into Redshift, the data gets distributed between slices on each node in your Redshift cluster. My suggestion would be to create a sort key on the column which you need to be sorted. Once you have a sort key on that column, you can run the VACUUM command to get your data sorted. Sorry! I cannot be of much help on Python/Pandas. If I've made a bad assumption please comment and I'll refocus my answer.
1
0
1
I've built some tools that create front-end list boxes for users that reference dynamic Redshift tables. New items in the table, they appear automatically in the list. I want to put the list in alphabetical order in the database so the dynamic list boxes will show the data in that order. After downloading the list from an API, I attempt to sort the list alphabetically in a Pandas dataframe before uploading. This works perfectly: df.sort_values(['name'], inplace=True, ascending=True, kind='heapsort') But then when I try to upload to Redshift in that order, it loses the order while it uploads. The data appears in chunks of alphabetically ordered segments. db_conn = create_engine('<redshift connection>') obj.to_sql('table_name', db_conn, index = False, if_exists = 'replace') Because of the way the third party tool (Alteryx) works, I need to have this data in alphabetical order in the database. How can I modify to_sql to properly upload the data in order?
Sorting and loading data from Pandas to Redshift using to_sql
1.2
1
0
725
46,611,612
2017-10-06T18:09:00.000
1
0
0
0
python,cntk
46,624,607
1
true
0
0
The prefetching functionality is part of the Deserializer classes in C++. Therefore, prefetching will not be available for custom data unless you write some C++ code.
1
2
0
If I implement an UserMinibatchSource in python, will the minibatch data be prefetched when training?
CNTK and UserMinibatchSource prefetch
1.2
0
0
54
46,616,153
2017-10-07T02:40:00.000
0
0
0
0
python,django,ubuntu,server,virtual-machine
46,616,491
4
false
1
0
It really depends on your requirements. Will you be accessing the website externally (making it public) or locally? Running Django from your laptop can work but if you are planning to make it public, you will need an external IP to point your domain to. Unless you have a business account, ISPs usually don't give static IPs to individual customers. Ubuntu would be a wise choice and you can run conda or virtualenv easily. VPS are quite cheap these days. You can look into AWS free tier that provides you with 500 hours/month on a micro server. If you are planning to access your website internally then you don't need anything other than your laptop or perhaps raspberry pi. If you are trying to make it available for everyone on the external network, VPS would be the best bet.
3
0
0
I've been wanting to run my own server for a while and I figured that running one for my django website would be a good start. What do you recommend I use for this? I've been trying to use a Ubuntu Virtual Machine to run it on one of my old laptops that I don't really use anymore until I can buy a dedicated server. Should I run it from a Virtual Machine? If so, would Ubuntu be best? That appears to be the case, but I want to be sure before I invest in anything. I want to be able to access the website from other computers, just like any other website. Am I going about this wrong? If so, what can you suggest me?
How to run a Django Project on a Personal Run Server
0
0
0
873
46,616,153
2017-10-07T02:40:00.000
0
0
0
0
python,django,ubuntu,server,virtual-machine
46,617,161
4
true
1
0
Yes, you will need a static IP address. If this is your first experiment, my advice would be: 1) Use an old, dedicated PC with no other stuff on it. Unless you do it just right, you should presume hackers could get anything on the disk... 2) Why make life complex with layer after layer of software? Install Ubuntu and run a standard server under a Unix OS 3) Be very careful about the rest of your attached network. Even if the PC is dedicated, unless you properly managed port forwarding, etc., ALL of your computers could be susceptible to attack. An old friend of mine discovered, back in the Napster peer-to-peer days, that he could basically go and read EVERYTHING on the hard drives of most people who had set up Napster on their computer.
3
0
0
I've been wanting to run my own server for a while and I figured that running one for my django website would be a good start. What do you recommend I use for this? I've been trying to use a Ubuntu Virtual Machine to run it on one of my old laptops that I don't really use anymore until I can buy a dedicated server. Should I run it from a Virtual Machine? If so, would Ubuntu be best? That appears to be the case, but I want to be sure before I invest in anything. I want to be able to access the website from other computers, just like any other website. Am I going about this wrong? If so, what can you suggest me?
How to run a Django Project on a Personal Run Server
1.2
0
0
873
46,616,153
2017-10-07T02:40:00.000
0
0
0
0
python,django,ubuntu,server,virtual-machine
46,618,880
4
false
1
0
As already stated Ubantu is a good choice but there is also Debian. I use Debian because I started off working with a colleague who was already using it and I find it very good. I began with an old, disused desktop PC which I nuked and turned into a proper linux server. For development I didn't need a very high spec machine. (Think it has 1 GB ram) I have it set up in my flat and my domestic internet connection is fine for most of my needs. Note: It isn't necessary to have a static IP address for development, although it is preferable if you already have one. As an alternative you can use a service such as dnydns.org where you can set up virtual domain names that point to your domestic dynamic IP address. Most routers these days have facilities within them for updating services like dyndns.org with your new dynamic IP address or you can install a plug-in to your server that will do this for you. All my projects have their own virtualenvs and I have VNCServer installed so I can access my server and work from anywhere where I have an internet connection. I've been running this way for the past three years with some household name clients and haven't had any issues at all. When it comes to production you can simply use any of the many VPS services that are out there. Amazon has already been mentioned. Someone recommended creating a droplet at DigitalOcean.com as I was wanting to host django applications and I find them to be very good and cost effective. Anyway just my 2 cents worth...hope it helps
3
0
0
I've been wanting to run my own server for a while and I figured that running one for my django website would be a good start. What do you recommend I use for this? I've been trying to use a Ubuntu Virtual Machine to run it on one of my old laptops that I don't really use anymore until I can buy a dedicated server. Should I run it from a Virtual Machine? If so, would Ubuntu be best? That appears to be the case, but I want to be sure before I invest in anything. I want to be able to access the website from other computers, just like any other website. Am I going about this wrong? If so, what can you suggest me?
How to run a Django Project on a Personal Run Server
0
0
0
873
46,618,762
2017-10-07T09:41:00.000
1
0
0
1
python,database,amazon-web-services,amazon-redshift
46,640,656
3
false
1
0
There are two options for running ETL on Redshift: (1) create some "create table as" type SQL, which will take your source tables as input and generate your target (transformed) table; or (2) do the transformation outside of the database using an ETL tool, for example EMR or Glue. Generally, in an MPP environment such as Redshift, the best practice is to push the ETL to the powerful database (i.e. option 1). Only consider taking the ETL outside of Redshift (option 2) where SQL is not the ideal tool for the transformation, or the transformation is likely to take a huge amount of compute resource. There is no inbuilt scheduling or orchestration tool. Apache Airflow is a good option if you need something more full featured than cron jobs.
1
1
0
I'm working with a small company currently that stores all of their app data in an AWS Redshift cluster. I have been tasked with doing some data processing and machine learning on the data in that Redshift cluster. The first task I need to do requires some basic transforming of existing data in that cluster into some new tables based on some fairly simple SQL logic. In an MSSQL environment, I would simply put all the logic into a parameterized stored procedure and schedule it via SQL Server Agent Jobs. However, sprocs don't appear to be a thing in Redshift. How would I go about creating a SQL job and scheduling it to run nightly (for example) in an AWS environment? The other task I have involves developing a machine learning model (in Python) and scoring records in that Redshift database. What's the best way to host my python logic and do the data processing if the plan is to pull data from that Redshift cluster, score it, and then insert it into a new table on the same cluster? It seems like I could spin up an EC2 instance, host my python scripts on there, do the processing on there as well, and schedule the scripts to run via cron? I see tons of AWS (and non-AWS) products that look like they might be relevant (AWS Glue/Data Pipeline/EMR), but there's so many that I'm a little overwhelmed. Thanks in advance for the assistance!
AWS Redshift Data Processing
0.066568
1
0
2,154
46,619,531
2017-10-07T11:16:00.000
4
0
0
0
python,c++,linux,opencv
46,619,774
2
true
0
0
OK, this is not exactly memory sharing in its real sense. What you want is IPC to send image data from one process to another. I suggest that you use Unix named pipes. You will have to get the raw data in a string format in C/C++, send it through a pipe or Unix socket to Python, and there get a numpy array from the sent data, perhaps using the np.fromstring() function. Do not worry about the speed, pipes are pretty fast. Local and Unix sockets as well. Most time will be lost on getting the string representation and turning it back into a matrix. There is a possibility that you can create a real shared memory space and get the data from OpenCV in C/C++ directly into Python, and then use OpenCV in Python to get out a numpy array, but it would be complicated. If you don't need the speed of light, your best bet is named pipes.
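A sketch of the Python end of that pipe, assuming the C/C++ side writes raw 8-bit BGR frames of a known size into a FIFO created with mkfifo (the path and frame shape are made up; np.frombuffer is used here in place of the older np.fromstring):

import numpy as np

HEIGHT, WIDTH, CHANNELS = 480, 640, 3
FRAME_BYTES = HEIGHT * WIDTH * CHANNELS

with open('/tmp/frame_pipe', 'rb') as pipe:
    # A real reader should loop until all FRAME_BYTES have arrived;
    # a single pipe read may return fewer bytes than requested
    raw = pipe.read(FRAME_BYTES)
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH, CHANNELS)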
1
7
1
Is there a way to share memory to share an openCV image (MAT in C+++ and numpy in python) image between a C/C++ and python? Multiplataform is not needed, I'm doing it in linux, I've thought share between mmap or similar think. I have two running processes one is written in C and the other is python, and I need to share an image between them. I will call from the c process to python via socket but I need to send and image and via memory. Another alternative could be write in memory file, not sure if it could be more time consuming.
Share memory between C/C++ and Python
1.2
0
0
5,634
46,620,657
2017-10-07T13:23:00.000
1
0
1
0
python,random
46,620,696
1
false
0
0
1 - Start with a list initialized with 5 items (maybe None?). 2 - Place the walker at index 2. 3 - Randomly choose a direction (-1 or +1). 4 - Move the walker in the chosen direction. 5 - Maybe print the space and mark the location of the walker. 6 - Repeat from step 3 as many times as needed.
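A small sketch of those steps (n = 2 gives the 5-cell space, with the walker starting at index 2):

import random

n = 2
m = 2 * n + 1      # size of the 1D space
position = n       # start in the middle
steps = 0

while 0 <= position < m:
    position += random.choice((-1, 1))   # equal probability left/right
    steps += 1
    # print the space, marking the walker's current cell
    print(''.join('*' if i == position else '.' for i in range(m)))

print('walked off the edge after', steps, 'steps')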
1
0
1
Start with a one dimensional space of length m, where m = 2 * n + 1. Take a step either to the left or to the right at random, with equal probability. Continue taking random steps until you go off one edge of the space, for which I'm using while 0 <= position < m. We have to write a program that executes the random walk. We have to create a 1D space using size n = 5 and place the marker in the middle. Every step, move it either to the left or to the right using the random number generator. There should be an equal probability that it moves in either direction. I have an idea for the plan but do not know how to write it in python: Initialize n = 1, m = 2n + 1, and j = n + 1. Loop until j = 0 or j = m + 1 as shown. At each step: Move j left or right at random. Display the current state of the walk, as shown. Make another variable to count the total number of steps. Initialize this variable to zero before the loop. However j moves always increase the step count. After the loop ends, report the total steps.
How to use random numbers that executes a one dimensional random walk in python?
0.197375
0
0
327
46,621,609
2017-10-07T14:59:00.000
0
0
0
0
python,django,architecture,django-rest-framework,cloudflare
46,624,369
1
true
1
0
This sounds like a single project that is being split as part of the deployment strategy. So it makes sense to use just a single codebase for it, rather than splitting it into two projects. If that's the case, re-use is a non-issue since both servers are using the same code. To support multiple deployments, then, you just create two settings files and load the appropriate one on the appropriate server. The degree to which they are different is up to you. If you want to support different views you can use different ROOT_URLCONF settings. The INSTALLED_APPS can be different, and so forth.
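One common way to lay that out, with made-up module and app names:

# project/settings/base.py - everything both deployments share
INSTALLED_APPS = ['django.contrib.contenttypes', 'core']

# project/settings/web.py - the Cloudflare-facing server
from .base import *
ROOT_URLCONF = 'project.urls_web'
INSTALLED_APPS = INSTALLED_APPS + ['api']

# project/settings/sync.py - the server that synchronizes with external web services
from .base import *
ROOT_URLCONF = 'project.urls_sync'
INSTALLED_APPS = INSTALLED_APPS + ['sync_jobs']

# each server is then started with its own DJANGO_SETTINGS_MODULE,
# e.g. project.settings.web or project.settings.sync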
1
1
0
I currently have a Django project running on a server behind Cloudflare. However, a number of the apps contain functionality that requires synchronizing data with certain web services. This is a security risk, because these web services may reveal the IP address of my server. Therefore I need a solution to prevent this. So far I came up two alternatives: using a proxy or splitting the project to two servers. One server responsible for responding to requests through Cloudflare and one server responsible for synchronizing data with other web services. The IP address of the latter server will be exposed to the public, however attacks on this server will not cause the website to be offline. I prefer the second solution, because this will also split the load between two servers. The problem is that I do not know how I should do this with Django without duplicating code. I know I can re-use apps, but for most of them counts that I, for instance, only need the models and the serializers and not the views etc. How should I solve this? What is the best approach to take? In addition, what is an appropriate naming for the two servers? Thanks
django - split project to two servers (due to Cloudflare) without duplicating code
1.2
0
0
68
46,621,774
2017-10-07T15:14:00.000
1
0
1
0
python,machine-learning,deep-learning,neural-network,epoch
58,870,147
2
true
0
0
Epoch: one round of forward propagation and backward propagation through the neural network for the whole dataset. Example: one round of throwing the ball into the basket, finding out the error, and coming back and changing the weights (f = ma). Forward propagation: the process of initializing the mass and acceleration with random values and predicting the output. Backward propagation: changing the values and predicting the output again (by finding out the gradient). Gradient: if I change the input X (independent variable), how much the value of y (dependent variable) changes. As for how many epochs are needed, there is no single answer; it depends on the dataset, but you can say that the number of epochs is related to how varied your data is (for example, do you have only white tigers in your dataset, or is it a much more varied dataset?). Iteration: the number of batches needed to complete one epoch. Example: we can divide a dataset of 1000 examples into batches of 250, then it will take 4 iterations to complete 1 epoch (here batch size = 250, iterations = 4).
2
3
1
What does the term epoch mean in a neural network? How does it differ from a pass and an iteration?
Epochs Vs Pass Vs Iteration
1.2
0
0
971
46,621,774
2017-10-07T15:14:00.000
2
0
1
0
python,machine-learning,deep-learning,neural-network,epoch
53,937,484
2
false
0
0
There are many neural network algorithms in unsupervised learning. As long as a cost function can be defined, neural networks can be used. For instance, there are autoencoders for dimensionality reduction, or Generative Adversarial Networks (two networks, one generating new samples). All of these are unsupervised learning and still use neural networks.
2
3
1
What does the term epoch mean in a neural network? How does it differ from a pass and an iteration?
Epochs Vs Pass Vs Iteration
0.197375
0
0
971
46,622,112
2017-10-07T15:46:00.000
0
0
0
0
python,apache,mod-wsgi,windows-server-2008-r2
46,645,404
1
true
1
0
The issue was that the Apache was built with VC14, but Python 2.7 naturally with VC9. Installing an Apache built with VC9 solved my issue.
1
0
0
I successfully installed mod_wsgi via pip install mod_wsgi on Windows. However, when I copy the output of mod_wsgi-express module-config into my httpd.conf and try to start the httpd, I get the following error: httpd.exe: Syntax error on line 185 of C:/path/to/httpd.conf: Cannot load c:/path/to/venv/Lib/site-packages/mod_wsgi/server/mod_wsgi.pyd into server This is already after correcting the pasted output of module-config, as it was .../venv/lib/site-packages/mod_wsgi/server/mod_wsgiNone (note the "None"). I changed the "None" to ".pyd" as this is the correct path. I already tried to install it outside the virtual env (Python being at C:\Python27), but it didn't make a difference -> same error. I also tried to uninstall/re-install mod_wsgi. I had one failed install as Microsoft Visual C++ Compiler for Python 2.7 (Version 9.0.0.30729) was not present. After that installation, the mod_wsgi always installed OK. The apache (Apache/2.4.27 (Win32)) comes from the xampp package and starts without issues when I remove the added lines for wsgi. I need to use Python 2.7 because of a third-party module. So going for 3.x is unfortunately not an option at the moment. Exact Python version is 2.7.13 (32-bit). For completeness, the output of module-config is: LoadModule wsgi_module "c:/www/my_project/venv/lib/site-packages/mod_wsgi/server/mod_wsgiNone" WSGIPythonHome "c:/www/my_project/venv" Update: tried one more thing: Uninstalled mod_wsgi (with pip) set "MOD_WSGI_APACHE_ROOTDIR=C:/WWW/apache" And pip install mod_wsgi again Still the same error...
Getting mod_wsgi to work with Python 2.7/Apache on Windows Server 2012; cannot load module
1.2
1
0
1,202
46,624,822
2017-10-07T20:31:00.000
0
0
0
0
python,ibm-cloud,ibm-watson,watson-nlu
46,929,384
1
true
1
0
NLU can be "manually" adapted to do batch analysis. But the Watson service that provides what you are asking for is Watson Discovery. It allows to create Collections (set of documents) that will be enriched thru an internal NLU function and then queried.
1
0
1
I have roughly 200 documents that need to have IBM Watson NLU analysis done. Currently, processing is performed one at a time. Will NLU be able to perform a batch analysis? What is the correct Python code or process to batch load the files and then collect the response results? The end goal is to grab results to analyze which documents are similar in nature. Any direction is greatly appreciated, as the IBM support documentation does not cover batch processing.
IBM Watson Natural Language Understanding uploading multiple documents for analysis
1.2
0
0
445
46,624,831
2017-10-07T20:32:00.000
0
0
0
0
javascript,python,svg,libraries
46,625,764
4
false
0
0
Try to use Pygal. It's used for creating interactive .svg pictures.
2
2
0
I am trying to use python and understand SVG drawings. I would like python to behave similar to java script and get information from SVG. I understand that there can be 2 types of information in SVG. XML based information - such as elementbyID, elementbyTagNames Structural information - positional information taking transformations in to consideration too - such as getelementfrompoint, getboundingbox I have searched around and found python libraries such as lxml for xml processing in svg. Also I found libraries such as svgpathtools, svg.path , but as I understand, these deal only with svgpath elements. So my question is, Are there any good libraries which support processing svg in python?(similar to java script)
Processing SVG in Python
0
0
1
1,202
46,624,831
2017-10-07T20:32:00.000
-1
0
0
0
javascript,python,svg,libraries
46,625,377
4
false
0
0
Start your search by visiting www.pypi.org and search for "svg". Review what exists and see what suits your needs.
2
2
0
I am trying to use python and understand SVG drawings. I would like python to behave similar to java script and get information from SVG. I understand that there can be 2 types of information in SVG. XML based information - such as elementbyID, elementbyTagNames Structural information - positional information taking transformations in to consideration too - such as getelementfrompoint, getboundingbox I have searched around and found python libraries such as lxml for xml processing in svg. Also I found libraries such as svgpathtools, svg.path , but as I understand, these deal only with svgpath elements. So my question is, Are there any good libraries which support processing svg in python?(similar to java script)
Processing SVG in Python
-0.049958
0
1
1,202
46,627,188
2017-10-08T03:15:00.000
0
0
1
1
python,dask,dask-distributed
46,631,210
1
true
0
0
Usually the solution to having different user environments is to launch and destroy networks of different Dask workers/schedulers on the fly on top of some other job scheduler like Kubernetes, Marathon, or Yarn. If you need to reuse the same set of dask workers then you could also be careful about specifying the workers= keyword consistently, but this would be error prone.
1
1
0
I'm specifically interested in avoiding conflicts when multiple users upload (upload_file) slightly different versions of the same python file or zip contents. It would seem this is not really a supported use case as the worker process is long-running and subject to the environment changes/additions of others. I like the library for easy, on-demand local/remote context switching, so would appreciate any insight on what options we might have, even if it means some seamless deploy-like step for user-specific worker processes.
What options exist for segregating python environments in a mult-user dask.distributed cluster?
1.2
0
0
77
46,627,610
2017-10-08T04:49:00.000
0
0
1
0
tensorflow,scipy,ubuntu-16.04,python-import
59,047,693
2
false
0
0
imread and imsave are deprecated in scipy.misc. Use imageio.imread instead (after import imageio). For saving, use imageio.imsave or imageio.imwrite. For resizing, use skimage.transform.resize (after import skimage).
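A minimal sketch of those replacements (file names are placeholders; in current imageio the saving function is imwrite, with imsave kept as an alias):

import imageio
from skimage.transform import resize

img = imageio.imread('input.png')                        # replaces scipy.misc.imread
small = resize(img, (64, 64), preserve_range=True)       # replaces scipy.misc.imresize
imageio.imwrite('output.png', small.astype(img.dtype))   # replaces scipy.misc.imsave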
1
3
0
I have successfully installed scipy, numpy, and pillow, however I get the error below: ImportError: cannot import name 'imread'
Successfully installed SciPy, but "from scipy.misc import imread" gives ImportError: cannot import name 'imread'
0
0
0
1,862
46,627,759
2017-10-08T05:16:00.000
2
0
1
0
python,ubuntu
46,627,783
1
false
0
0
It sounds to me like you have a file with a bunch of data that really should be a database. Please consider using a database instead of a file to represent the 500,000-record nested list. This will have the effect of increasing the performance of your current setup and will also allow you to execute complex queries and index into the data. If you don't feel like networking and all that jazz, I also recommend that you use SQLite. SQLite has C and C++ bindings that allow you to easily use it from Python, among other languages, and it is also very efficient.
1
0
0
Problem: Made a huge nested list with over 500k combinations. When loading or running the terminal and Visual Studio, the laptop freezes. I just upgraded RAM from 4GB (2x2) to 8GB (1x8). I am planning to add another 8GB stick. CPU: i5-2520M Question: Is it the lack of RAM or the processor that might be causing the laptop to freeze? Note: I use a cooling pad.
Freezing due to lack of ram?
0.379949
0
0
43
46,629,979
2017-10-08T10:18:00.000
0
0
1
0
python,python-3.x,cmake,python-2.x
46,632,429
1
false
0
0
Ok, I managed to find a solution. Just create a FindPythonLibs3 module to look for the Python 3 paths and set new variables for each path. It will then be possible to use these paths for Python 3 and the others for Python 2 without conflict.
1
0
0
I am developing an IDE with plugins to support Python 2 and Python 3. The build system is CMake. The problem is that when CMake looks for Python 2, the Python variables will point to Python 2 paths. But now I need to get the paths to Python 3 too. Does anyone know if it is possible to do that? If yes, how?
How I can use CMake for linking Python 2 and Python 3 libraries for a same software
0
0
0
19