Q_Id
int64
2.93k
49.7M
CreationDate
stringlengths
23
23
Users Score
int64
-10
437
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
DISCREPANCY
int64
0
1
Tags
stringlengths
6
90
ERRORS
int64
0
1
A_Id
int64
2.98k
72.5M
API_CHANGE
int64
0
1
AnswerCount
int64
1
42
REVIEW
int64
0
1
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
15
5.1k
Available Count
int64
1
17
Q_Score
int64
0
3.67k
Data Science and Machine Learning
int64
0
1
DOCUMENTATION
int64
0
1
Question
stringlengths
25
6.53k
Title
stringlengths
11
148
CONCEPTUAL
int64
0
1
Score
float64
-1
1.2
API_USAGE
int64
1
1
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
15
3.72M
46,384,630
2017-09-23T21:55:00.000
2
0
1
0
0
python,visual-studio-code
0
46,384,705
0
2
0
false
0
0
I am not sure how you try to run this program, but you can just go to menu View → Terminal and type python your_program.py in TERMINAL from the folder where your program is located. And please check if you have not accidentally installed Visual Studio instead of Visual Studio Code (those are two completely different programs).
2
3
0
0
I have this very beginner question that I happened to install Visual Studio Code on my Mac, and every time I tried to run a simple Python program on that, it said that I need a workspace to run, so how do I create the workspace?
How can I create a workspace in Visual Studio Code?
0
0.197375
1
0
0
5,816
46,384,630
2017-09-23T21:55:00.000
0
0
1
0
0
python,visual-studio-code
0
69,560,073
0
2
0
false
0
0
VSCode workspaces are basically just folders. If you open an empty folder in VSCode it will get treated as a workspace, and you can add any scripts you want to it. VSCode will create a new hidden folder in the folder you chose that will hold settings for the workspace. For python, make sure you install the python extension (just grab the one with the most downloads) and follow the instructions there to make sure your python environment is properly configured. If you're using git, you might want to add that hidden folder to the gitignore.
2
3
0
0
I have this very beginner question that I happened to install Visual Studio Code on my Mac, and every time I tried to run a simple Python program on that, it said that I need a workspace to run, so how do I create the workspace?
How can I create a workspace in Visual Studio Code?
0
0
1
0
0
5,816
46,392,625
2017-09-24T17:04:00.000
0
0
0
0
0
python,random
0
46,392,727
0
5
0
false
0
0
If you need random integer values between 0 and c use random.randint(0, c). For random floating point values between 0 anc c use random.uniform(0, c).
2
1
1
0
I happen to have a list y = [y1, y2, ... yn]. I need to generate random numbers ai such that 0<=ai<=c (for some constant c) and sum of all ai*yi = 0. Can anyone help me out on how to code this in python? Even pseudocode/logic works, I am having a conceptual problem here. While the first constraint is easy to satisfy, I cannot get anything for the second constraint. EDIT: Take all yi = +1 or -1, if that helps, but a general solution to this would be interesting.
Generating Random Numbers Under some constraints
0
0
1
0
0
1,846
46,392,625
2017-09-24T17:04:00.000
0
0
0
0
0
python,random
0
46,440,005
0
5
0
false
0
0
I like splitting this problem up. Note that there must be some positive and some negative values of y (otherwise sum(ai*yi) can't equal zero). Generate random positive coefficients ai for the negative values of y, and construct the sum of ai*yi over only the negative values of y (let's say this sum is -R). Assuming there are "m" remaining positive values, choose random numbers for the first m-1 of the ai coefficients for positive yi values, according to ai = uniform(R/(m*max(y)). Use your constraint to determine am = (R-sum(aiyi | yi> 0))/ym. Notice that, by construction, all ai are positive and the sum of aiyi = 0. Also note that multiplying all ai by the same amount k will also satisfy the constraint. Therefore, find the largest ai (let's call it amax), and if amax is greater than c, multiply all values of ai by c/(amax + epsilon), where epsilon is any number greater than zero. Did I miss anything?
2
1
1
0
I happen to have a list y = [y1, y2, ... yn]. I need to generate random numbers ai such that 0<=ai<=c (for some constant c) and sum of all ai*yi = 0. Can anyone help me out on how to code this in python? Even pseudocode/logic works, I am having a conceptual problem here. While the first constraint is easy to satisfy, I cannot get anything for the second constraint. EDIT: Take all yi = +1 or -1, if that helps, but a general solution to this would be interesting.
Generating Random Numbers Under some constraints
0
0
1
0
0
1,846
46,394,954
2017-09-24T21:16:00.000
3
1
1
0
0
python,mp3,music21
0
46,396,673
0
3
0
false
0
0
There are ways of doing this in music21 (audioSearch module) but it's more of a proof of concept and not for production work. There are much better software packages for analyzing audio (try sonic visualizer or jMIR or a commercial package). Music21's strength is in working with scores.
3
3
0
0
I am looking for python library to find out a key and tempo of the song recorded in MP3 format. I've found the music21 lib that allows doing that. But it seems like it works only with midi files. Does somebody know how to parse MP3 files using music21 and get the required sound characteristics? If it is impossible, please suggest another library.
Is it possible to analyze mp3 file using music21?
1
0.197375
1
0
0
995
46,394,954
2017-09-24T21:16:00.000
4
1
1
0
0
python,mp3,music21
0
46,395,044
0
3
0
true
0
0
No, this is not possible. Music21 can only process data stored in musical notation data formats, like MIDI, MusicXML, and ABC. Converting a MP3 audio file to notation is a complex task, and isn't something that software can reliably accomplish at this point.
3
3
0
0
I am looking for python library to find out a key and tempo of the song recorded in MP3 format. I've found the music21 lib that allows doing that. But it seems like it works only with midi files. Does somebody know how to parse MP3 files using music21 and get the required sound characteristics? If it is impossible, please suggest another library.
Is it possible to analyze mp3 file using music21?
1
1.2
1
0
0
995
46,394,954
2017-09-24T21:16:00.000
1
1
1
0
0
python,mp3,music21
0
46,417,595
0
3
0
false
0
0
Check out librosa. It can read mp3s and give some basic info such as tempo.
3
3
0
0
I am looking for python library to find out a key and tempo of the song recorded in MP3 format. I've found the music21 lib that allows doing that. But it seems like it works only with midi files. Does somebody know how to parse MP3 files using music21 and get the required sound characteristics? If it is impossible, please suggest another library.
Is it possible to analyze mp3 file using music21?
1
0.066568
1
0
0
995
46,397,244
2017-09-25T03:36:00.000
0
0
0
0
0
python,iphone,mdm
0
46,548,859
0
1
0
true
1
0
Did you remove the Profile from the Device settings or through the Apple Configurator? Many times even if you remove profile from Device settings, its still there in the Device. You can see if its still there in the Device using Apple Configurator and try removing it from there.
1
0
0
0
I have a strange problem at my iphone, when I remove the mdm profile, the camera always be disabled, it's not come back , the mdm profile is only one , how to reset it?
remove the profile but the camera always be disabled
0
1.2
1
0
0
27
46,421,437
2017-09-26T08:29:00.000
3
0
0
0
1
django,python-3.x,django-rest-framework,django-serializer
0
46,424,881
0
1
0
true
1
0
That's because you are using the browsable API. JSON renderer will only call it once. Browsable API needs several calls: for the result itself for the raw data tab when you can modify a resource through PUT for the raw data tab when you can modify a resource through PATCH for the HTML form tab
1
0
0
0
Lets say I have a model called Thingy, and there are 20 Thingies in my database. When I retrieve all Thingies, serializer.to_represenatation() is executed 20 times. This is good. However, when I retrieve just a single Thingy from /api/thingies/1, I observe that serializer.to_representation() is executed four (4!!!) times. Why does this happen, and how can I get away with just one call to to_representation()?
Why does retrieving a single resource execute serializer.to_representation() multiple times in Django REST framework?
0
1.2
1
0
0
229
46,432,544
2017-09-26T17:29:00.000
3
0
1
0
0
python,automation,pywinauto
1
46,433,055
0
2
0
true
0
0
You can add found_index=0 or other index to the window specification object. This is the first way to disambiguate the search. Also there are methods .children() and .descendants() with additional params like control_type or title (as I remember title should work), but some window specification params are not supported in these methods.
1
1
0
0
I'm using a WPF application that has custom stack panel, which is basically a list. The item in the list is exactly the same so I'm not able to select a specific text to uniquely identify the elements. And some other values such as time are dynamic. Is there a way for me to get the list of elements returned. I know it's possible because the error was thrown, ElementAmbiguousError state the count. If I could do that, then from that list I can use the index and validate what I need.
Pywinauto how do I get the list of returned elements
0
1.2
1
0
0
6,545
46,444,065
2017-09-27T09:20:00.000
0
0
1
0
0
javascript,java,python
0
46,444,255
1
1
0
false
0
0
Write protection normally only exists for complete files. So you could revoke write permissions for the file, but then also appending isn't possible anymore. For ensuring that no tampering has taken place, the standard way would be to cryptographically sign the data. You can do this like this, in principle: Take the contents of the file. Add a secret key (any arbitrary string or random characters will do, the longer the better) to this string. Create a cryptographical checksum (SHA256 hash or similar). Append this hash to the file. (Newlines before and after.) You can do this again every time you append something to the file. Because nobody except you knows your secret key, nobody except you will be able to produce the correct hash codes of the part of the file above the hash code. This will not prevent tampering but it will be detectable. This is relatively easily done using shell utilities like sha256sum for mere text files. But you have a JSON structure in a file. This is a complex case because the position in the file does not correlate with the age of the data anymore (unlike in a text file which is only being appended to). To still achieve what you want you need to have an age information on the data. Do you have this? If you provide the JSON structure as @Rohit asked for we might be able to give more detailed advice.
1
0
0
0
I need to store some date stamped data in a JSON file. It is a sensor output. Each day the same JSON file is updated with the additional data. Now, is it possible to put some write protection on already available data to ensure that only new lines could be added to the document and no manual tampering should occur with it? I suspect that creating checksums after every update may help, but I am not sure how do I implement it? I mean if some part of JSON file is editable then probably checksum is also editable. Any other way for history protection?
Is it possible to write protect old data of JSON Files and only enable appending?
1
0
1
0
0
87
46,492,388
2017-09-29T15:39:00.000
3
0
0
0
0
python,database,sqlite
0
46,492,537
0
2
0
false
0
0
SQLite3 is embedded-only database so it does not have network connection capabilities. You will need to somehow mount the remote filesystem. With that being said, SQLite3 is not meant for this. Use PostgreSQL or MySQL (or anything else) for such purposes.
1
4
0
0
I have a question about sqlite3. If I were to host a database online, how would I access it through python's sqlite3 module? E.g. Assume I had a database hosted at "www.example.com/database.db". Would it be as simple as just forming a connection with sqlite3.connect ("www.example.com/database.db") or is there more I need to add so that the string is interpreted as a url and not a filename?
Connecting to an online database through python sqlite3
0
0.291313
1
1
0
1,622
46,516,325
2017-10-01T19:56:00.000
0
0
0
0
0
python,machine-learning,cluster-analysis,categorical-data
0
51,921,509
0
1
1
false
0
0
Agreeing with @DIMKOIM, Multiple Correspondence Analysis is your best bet. PCA is mainly used for continuous variables. To visualize your data, you can build a scatter plot from scratch.
1
0
1
0
I have a high-dimensional dataset which is categorical in nature and I have used Kmodes to identify clusters, I want to visualize the clusters, what would be the best way to do that? PCA doesn't seem to be a recommended method for dimensionality reduction in a categorical dataset, how to visualize in such a scenario?
How to plot a cluster in python prepared using categorical data
0
0
1
0
0
1,437
46,536,893
2017-10-03T03:44:00.000
0
0
1
0
1
python,ubuntu,tensorflow,pycharm,tensorflow-gpu
0
50,614,889
0
1
0
true
0
0
Actually the problem was, the python environment for the pycharm project is not the same as which is in run configurations. This issue was fixed by changing the environment in run configurations.
1
1
1
0
When i'm running my tensorflow training module in pycharm IDE in Ubuntu 16.04, it doesn't show any training with GPU and it trains usually with CPU. But When i run the same python script using terminal it runs using GPU training. I want to know how to configure GPU training in Pycharm IDE.
Tensorflow GPU doesn't work in Pycharm
0
1.2
1
0
0
691
46,562,267
2017-10-04T10:10:00.000
0
0
0
0
0
python-3.x,rest,api,security,authentication
0
46,562,640
0
1
0
false
1
0
Put an API Gateway in front of your API , your API Gateway is publicly ( i.e in the DMZ ) exposed while the actual API are internal. You can look into Kong..
1
0
0
0
For the last few months i've been working on a Rest API for a web app for the company I work for. The endpoints supply data such as transaction history, user data, and data for support tickets. However, I keep running into one issue that always seems to set me back to some extent. The issue I seem to keep running into is how do I handle user authentication for the Rest API securely? All data is going to be sent over a SSL connection, but there's a part of me that's paranoid about potential security problems that could arise. As it currently stands when a client attempts to login the client must provide a username or email address, and a password to a login endpoint (E.G "/api/login"). Along with with this information, a browser fingerprint must be supplied through header of the request that's sending the login credentials. The API then validates whether or not the specified user exists, checks whether or not the password supplied is correct, and stores the fingerprint in a database model. To access any other endpoints in the API a valid token from logging in, and a valid browser fingerprint are required. I've been using browser fingerprints as a means to prevent token-hijacking, and as a way make sure that the same device used to login is being used to make the requests. However, I have noticed a scenario where this practice backfires on me. The client-side library i'm using to generate browser fingerprints isn't always accurate. Sometimes the library spits out a different fingerprint entirely. Which causes some client requests to fail as the different fingerprint isn't recognized by the API as being valid. I would like to keep track of what devices are used to make requests to the API. Is there a more consistent way of doing so, while still protecting tokens from being hijacked? When thinking of the previous question, there is another one that also comes to mind. How do I store auth tokens on client-side securely, or in a way that makes it difficult for someone to obtain the tokens through malicious means such as a xss-attack? I understand setting a strict Content-Security Policy on browser based clients can be effective in defending against xss-attacks. However, I still get paranoid about storing tokens as cookies or in local storage. I understand oauth2 is usually a good solution to user authentication, and I have considered using it before to deal with this problem. Although, i'm writing the API using Flask, and i'm also using JSON Web tokens. As it currently stands, Flask's implementation of oauth2 has no way to use JWTs as access tokens when using oauth for authentication. This is my first large-scale project where I have had to deal with this issue and i am not sure what to do. Any help, advice, or critiques are appreciated. I'm in need of the help right now.
How to handle Rest API user authentication securely?
0
0
1
0
1
260
46,583,487
2017-10-05T10:31:00.000
0
0
1
0
1
python,data-structures,microcontroller,micropython
0
47,052,669
0
1
1
false
0
0
Sorry, but your question contains the answer - if you need to work with 32x32 tiles, the best format is that which represents your big image as a sequence of tiles (and e.g. not as one big 256x256 image, though reading tiles out of it is also not a rocket science and should be fairly trivial to code in MicroPython, though 32x32 tiles would be more efficient of course). You don't describe the exact format of your images, but I wouldn't use pickle module for it, but store images as raw bytes and load them into array.array() objects (using inplace .readinto() operation).
1
0
1
0
I'm writing some image processing routines for a micro-controller that supports MicroPython. The bad news is that it only has 0.5 MB of RAM. This means that if I want to work with relatively big images/matrices like 256x256, I need to treat it as a collection of smaller matrices (e.g. 32x32) and perform the operation on them. Leaving at aside the fact of reconstructing the final output of the orignal (256x256) matrix from its (32x32) submatrices, I'd like to focus on how to do the loading/saving from/to disk (an SD card in this case) of this smaller matrices from a big image. Given that intro, here is my question: Assuming I have a 256x256 on disk that I'd like to apply some operation onto (e.g. convolution), what's the most convenient way of storing that image so it's easy to load it into 32x32 image patches? I've seen there is a MicroPython implementation of the pickle module, is this a good idea for my problem?
Load portions of matrix into RAM
0
0
1
0
0
52
46,594,866
2017-10-05T21:11:00.000
1
0
0
0
0
python,sql,database,orm,sqlalchemy
0
46,777,010
0
2
0
true
1
0
After asking around in #sqlalchemy IRC, it was pointed out that this could be done using ORM-level relationships in an before_flush event listener. It was explained that when you add a mapping through a relationship, the foreign key is automatically filled on flush, and the appropriate insert statement generated by the ORM.
1
1
0
0
So I have two table in a one-to-many relationship. When I make a new row of Table1, I want to populate Table2 with the related rows. However, this population actually involves computing the Table2 rows, using data in other related tables. What's a good way to do that using the ORM layer? That is, assuming that that the Table1 mappings are created through the ORM, where/how should I call the code to populate Table2? I thought about using the after_insert hook, but i want to have a session to pass to the population method. Thanks.
Populating related table in SqlAlchemy ORM
0
1.2
1
1
0
601
46,599,013
2017-10-06T05:44:00.000
0
0
1
0
0
python,list,dictionary,key-value
0
46,599,399
0
6
0
false
0
0
class is used when there is both state and behavior . dict is used when there's only state. since you have both use class.
2
1
0
1
I'm a beginner at programming (and also an older person), so I don't really know how to express what I want correctly. I've tried to describe my question as thoroughly as possible, so really appreciate your patience! I would like to store the winning scores associated with each user. Each user would have different number of winning scores. I do not need to seperate users first name and last name, they can all be one string. I do not need the scores to be ordered in any way, and I don't need to be able to change them. I only need to be able to add the scores whenever the user wins and sort users by amount of wins. I will extract all the winning scores for statistics, but the statistics will not be concerned with what score belongs to what user. Once the program stops, it can all be erased from memory. From what I've researched so far it seems my best options are to either create a user class, where I store a list and add to the list each time. Or to create a dictionary with a key for each user. But since each user may have a different amount of winning scores, I don't know if I can use dictionaries with that (unless each key is associated with a list maybe?). I don't think I need something like a numpy array, since I want to create very simple statistics without worrying about what score belongs to what user. I need to think about not using an unnecessary amount of memory etc., especially because there may be a hundred winning scores for each user. But I can't really find clear information on what the benefits of dictionaries vs classes are. The programming community is amazingly helpful and full of answers, but unfortunately I often don't understand the answers. Greatful for any help I can get! And don't be afraid to tell me my ideas are dumb, I want to learn how to think like a programmer.
Beginners python. Best way to store different amount of items for each key?
0
0
1
0
0
69
46,599,013
2017-10-06T05:44:00.000
1
0
1
0
0
python,list,dictionary,key-value
0
46,600,049
0
6
0
false
0
0
You can use Dictionary as values in dicts can be mutable like a list where you can keep all the scores/winning scores for each user. {'player1' : [22,33,44,55], 'player2' : [23,34,45], ..... } If this is not an exercise that you will repeat dicts make sense but if it is an exercise that might need to be done again in future Classes are better alternative as explained in other answers by Stuart and Hallsville3. Hope it helps!
2
1
0
1
I'm a beginner at programming (and also an older person), so I don't really know how to express what I want correctly. I've tried to describe my question as thoroughly as possible, so really appreciate your patience! I would like to store the winning scores associated with each user. Each user would have different number of winning scores. I do not need to seperate users first name and last name, they can all be one string. I do not need the scores to be ordered in any way, and I don't need to be able to change them. I only need to be able to add the scores whenever the user wins and sort users by amount of wins. I will extract all the winning scores for statistics, but the statistics will not be concerned with what score belongs to what user. Once the program stops, it can all be erased from memory. From what I've researched so far it seems my best options are to either create a user class, where I store a list and add to the list each time. Or to create a dictionary with a key for each user. But since each user may have a different amount of winning scores, I don't know if I can use dictionaries with that (unless each key is associated with a list maybe?). I don't think I need something like a numpy array, since I want to create very simple statistics without worrying about what score belongs to what user. I need to think about not using an unnecessary amount of memory etc., especially because there may be a hundred winning scores for each user. But I can't really find clear information on what the benefits of dictionaries vs classes are. The programming community is amazingly helpful and full of answers, but unfortunately I often don't understand the answers. Greatful for any help I can get! And don't be afraid to tell me my ideas are dumb, I want to learn how to think like a programmer.
Beginners python. Best way to store different amount of items for each key?
0
0.033321
1
0
0
69
46,600,280
2017-10-06T07:17:00.000
0
0
1
0
0
python,python-3.x,anaconda,spyder
1
46,607,726
0
1
0
false
0
0
The problem is you have two Python versions installed in your system: one in C:\ProgramData\Anaconda3\ and the other in C:\Users\Jaker\AppData\Roaming\Python\Python36. Please uninstall one of them and the problem will be solved.
1
0
0
0
Tried anaconda today , it seems fine but when I tried to launch Spyder each time I get this error: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\Scripts\spyder-script.py", line 6, in <module> from spyder.app.start import main File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\app\start.py", line 23, in <module> from spyder.utils.external import lockfile File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\external\lockfile.py", line 22, in <module> import psutil File "C:\Users\Jaker\AppData\Roaming\Python\Python36\site-packages\psutil\__init__.py", line 126, in <module> from . import _pswindows as _psplatform File "C:\Users\Jaker\AppData\Roaming\Python\Python36\site-packages\psutil\_pswindows.py", line 16, in <module> from . import _psutil_windows as cext ImportError: cannot import name '_psutil_windows' Any help regarding this ? Also how do I get python 3.6.3 in anaconda..?
Error installing Spyder in anaconda
0
0
1
0
0
927
46,605,599
2017-10-06T12:19:00.000
1
0
0
0
0
python,python-telegram-bot
0
46,637,593
0
1
0
false
0
0
To be exact you can find the message object (or/and the text) of a callback_query at update.callback_query.message.text. Or for convenience you can always use update.effective_chat, update.effective_message and update.effective_user to access the chat, message and from_user objects wherever they are in (no matter it's a normal message, a callback_query, an inline_query etc.)
1
0
0
0
update.callback_query.from_user Inside the same function I used update.message.text where i tried to get message from user update.message.text is not working giving me 'NoneType' object has no attribute 'text' how can i use two update in same function
I used Update.callback_query after which i used Update.message in same function its not working
1
0.197375
1
0
0
1,504
46,620,657
2017-10-07T13:23:00.000
1
0
1
0
0
python,random
0
46,620,696
0
1
0
false
0
0
1 - Start with a list initialized with 5 items (maybe None?) 2 - place the walker at index 2 3 - randomly chose a direction (-1 or + 1) 4 - move the walker in the chosen direction 5 - maybe print the space and mark the location of the walker 6 - repeat at step 3 as many times as needed
1
0
1
0
Start with a one dimensional space of length m, where m = 2 * n + 1. Take a step either to the left or to the right at random, with equal probability. Continue taking random steps until you go off one edge of the space, for which I'm using while 0 <= position < m. We have to write a program that executes the random walk. We have to create a 1D space using size n = 5 and place the marker in the middle. Every step, move it either to the left or to the right using the random number generator. There should be an equal probability that it moves in either direction. I have an idea for the plan but do not know how to write it in python: Initialize n = 1, m = 2n + 1, and j = n + 1. Loop until j = 0 or j = m + 1 as shown. At each step: Move j left or right at random. Display the current state of the walk, as shown. Make another variable to count the total number of steps. Initialize this variable to zero before the loop. However j moves always increase the step count. After the loop ends, report the total steps.
How to use random numbers that executes a one dimensional random walk in python?
0
0.197375
1
0
0
327
46,621,609
2017-10-07T14:59:00.000
0
0
0
0
1
python,django,architecture,django-rest-framework,cloudflare
0
46,624,369
0
1
1
true
1
0
This sounds like a single project that is being split as part of the deployment strategy. So it makes sense to use just a single codebase for it, rather than splitting it into two projects. If that's the case, re-use is a non-issue since both servers are using the same code. To support multiple deployments, then, you just create two settings files and load the appropriate one on the appropriate server. The degree to which they are different is up to you. If you want to support different views you can use different ROOT_URLCONF settings. The INSTALLED_APPS can be different, and so forth.
1
1
0
0
I currently have a Django project running on a server behind Cloudflare. However, a number of the apps contain functionality that requires synchronizing data with certain web services. This is a security risk, because these web services may reveal the IP address of my server. Therefore I need a solution to prevent this. So far I came up two alternatives: using a proxy or splitting the project to two servers. One server responsible for responding to requests through Cloudflare and one server responsible for synchronizing data with other web services. The IP address of the latter server will be exposed to the public, however attacks on this server will not cause the website to be offline. I prefer the second solution, because this will also split the load between two servers. The problem is that I do not know how I should do this with Django without duplicating code. I know I can re-use apps, but for most of them counts that I, for instance, only need the models and the serializers and not the views etc. How should I solve this? What is the best approach to take? In addition, what is an appropriate naming for the two servers? Thanks
django - split project to two servers (due to Cloudflare) without duplicating code
0
1.2
1
0
0
68
46,630,267
2017-10-08T10:52:00.000
3
0
1
0
0
python,python-2.7,python-3.x,pip
0
46,630,308
0
1
0
true
0
0
If you have the 2 versions really installed, you should have a pip2 or pip2.x available in your PATH
1
2
0
0
I would like to ask about how to use pip install for Python 2.7, when I previously installing for and using Python 3.6 ? (*I now have to versions of Python on Windows) pip install ... keeps installing for Python 3.6 I need to use the previous version, to rewrite the code in Python 2.7. (this is for building a Kivy app, although Kivy says it now supports Python 3 but it also says * Warning.) In order to do this, I have to import necessary modules : kivy and numpy. Hope for feedbacks on this, Thanks.
How to python pip install for Python 2.7, having using Python 3.6 before on Windows
0
1.2
1
0
0
1,491
46,636,125
2017-10-08T21:13:00.000
0
0
0
1
0
python,terminal
0
46,636,156
0
1
0
false
0
0
As fas as I can understand from your question, you can make the terminal to be fullscreen by pressing F11 (at least in Ubuntu)
1
0
0
0
how do I write/execute a Python script full screen at the terminal. I want to write a small Programm which shoud be shown like "vim", "sl", or "nano".
Execute Python script in terminal fullscreen
0
0
1
0
0
675
46,642,598
2017-10-09T09:05:00.000
0
0
1
0
0
python,python-3.x,count
0
46,642,725
0
2
0
false
0
0
You can use a variable that will increment when the function calls or stops.
1
0
0
0
I am trying to figure out how to code how many times my main program has been called. I want to count how many times my program has been called, lets call the program "test". In the program different sub-functions can be called. therefore i want to be able to count those aswell, let's call them "program-1, Program-2 ...etc" Also i want to be able to see how many times the program has been stopped, like how many times a user needed to push the kill-switch. Anyone out there who might have any idea how to do this?
Count number of times my code has been called?
0
0
1
0
0
198
46,675,769
2017-10-10T20:49:00.000
0
1
0
0
0
python-2.7,selenium
0
46,677,415
0
1
0
false
0
0
I don't see why you couldn't. You can install pip with apt install python-pip, you'll probably need to sudo that command unless you login as root. Then you can just open a terminal and use the pip install command to get selenium. If that doesn't work you can try running python -m pip install instead.
1
0
0
1
i have had a rough time getting my scripts to work on my raspberry pi zero w and the last program i need installed requires selenium. This script was designed for windows 10 + python 2.7 because i make my scripts in this environment. I was wondering if it is possible to use selenium on a raspberry pi zero w and preferably headless if possible. I can't find any info, help or guidelines online anywhere and have no idea how to use pip in raspbian (if it even has pip).
Selenium (Maybe headless) on raspberry pi zero w
1
0
1
0
1
757
46,689,334
2017-10-11T13:21:00.000
3
0
1
0
0
python,colors,visual-studio-code,themes
0
63,959,173
0
3
0
false
0
0
Leonard's answer above is perfect. Just for the googlers in 2020, I would add that the command in Command Palette: Developer: Inspect TM Scopes seems to have changed to: Developer: Inspect Editor Tokens and Scopes. This option shows both the standard token types, as well as TextMate scopes.
1
14
0
0
Could some one explain to me please how to customize docstring color for Python in VSCode's default theme? I want to do it thru User Settings because want to be able to save my config file. I tried to use "editor.tokenColorCustomizations": {} but it affects all strings.
How to customize docstring color for Python in VSCode's default theme?
0
0.197375
1
0
0
4,945
46,696,267
2017-10-11T19:41:00.000
4
1
0
0
0
python,pycharm,remote-debugging
1
46,715,433
0
1
0
true
0
0
Solve the problem. There are two places to edit the same remote interpreter. One is from Default Setting-> Project Interpreter -> Setting Icon -> More -> edit icon, another is from Tools -> Deployment -> Configuration. The settings in both places need to be correct for the same remote interpreter. For some reason, the password in my first location was cleared.
1
1
0
0
I have been using remote interpreter all the times before, but suddenly it shows failed error message: can't run python interpreter: error connecting to remote host: I am using SFTP, and I have tried "Test SFTP connection", got success message with the same host. I am wondering how do I see verbose message in the remote debugging connection. I am using PyCharm 2017.2 professional.
Pycharm stopped working in remote interpreter
0
1.2
1
0
0
1,301
46,701,431
2017-10-12T04:29:00.000
0
0
1
0
0
python,arrays
1
46,701,588
0
1
0
false
0
0
Length of array: len(array). Try to do 2 cycles to spread all values to 2-d array.
1
0
1
0
I am trying to find the length of a 1-D float array and convert it into a 2-d array in python. Also when I am trying to print the elements of the float array the following error is coming:- 'float' object is not iterable
How to find the number of elements of a float array in python and how to convert it to a 2-dimensional float array?
0
0
1
0
0
608
46,708,236
2017-10-12T11:16:00.000
0
1
0
0
0
c#,python,json,rest,web-services
0
46,719,477
0
1
0
false
0
0
It is clearly difficult to provide an answer with little details. TL;DR is that it depends on what game you are developing. However, polling is very inefficient for at least three reasons: The former, as you have already pointed out, it is inefficient because you generate additional workload when there is no need The latter, because it requires TCP - server-generated updates can be sent using UDP instead, with some pros and cons (like potential loss of packets due to lack of ACK) You may get the updates too late, particularly in the case of multiplayer games. Imagine that the last update happened right after the previous poll, and your poll is each 5 seconds. The status could be already stale. The long and the short of it is that if you are developing a turn-based game, poll could be alright. If you are developing (as the use of Unity3D would suggest) a real-time game, then server-generated updates, ideally using UDP, are in my opinion the way to go. Hope that helps and good luck with your project.
1
0
0
0
I have a game where I have to get data from the server (through REST WebService with JSON) but the problem is I don't know when the data will be available on the server. So, I decided to use such a method that hit Server after specific time or on request on every frame of the game. But certainly this is not the right, scale able and efficient approach. Obviously, hammering is not the right choice. Now my question is that how do I know that data has arrived at server so that I can use this data to run my game. Or how should I direct the back-end team to design the server in an efficient way that it responds efficiently. Remember at server side I have Python while client side is C# with unity game-engine.
Check the data has updated at server without requesting every frame of the game
0
0
1
0
1
46
46,714,971
2017-10-12T16:48:00.000
0
0
0
0
0
python,sql,sqlalchemy,amazon-redshift
0
46,715,732
0
2
0
false
0
0
If you don't run much else on that machine then memory should not be an issue. Give it a try. Monitor memory use during the execution. Also use "load" to see what pressure on the system is.
1
0
0
0
Im going to run query that returns a huge table (about 700Mb) from Redshift and save it to CSV using SQLAlchemy and python 2.7 on my local machine (mac pro). I've never done this with such a huge queries before and obviously there could be some memory and other issues. My question is what i shall take into account and how to use sql alchemy in order to make the process work? Thanks, Alex
Python/SQLAlchemy: How to save huge redshift table to CSV?
0
0
1
1
0
1,752
46,720,222
2017-10-12T23:12:00.000
2
0
1
0
1
python,pip,conda
0
46,825,476
0
1
0
true
0
0
Try using the below command on windows command prompt or PowerShell: pip install --proxy DOMAIN\username:password@proxyserver:port packagename Replace the DOMAIN, username, password, proxy server and port with values specific to your system. This works for a windows 10 installation authenticated by Active Directory that is behind a corporate proxy server.
1
2
0
0
In R I can use install.packages("pkgName") to install a new package no problems. But when I tried python and do pip install package it fails with error Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 11004] getaddrinfo failed',)': /simple/pyarrow/ I think it's because pip doesn't know how to automatically detect the proxy (that's gets set in Internet Explorer) like R can. Most of the info I find online either don't work or is just too complicated for someone without specialist knowledge to follow. conda install fails as well. Is there an easy fix to this?
How to use conda/pip install to install packages behind a corporate proxy?
0
1.2
1
0
0
2,851
46,742,589
2017-10-14T08:41:00.000
0
0
1
0
0
python,unicode,utf-8
0
46,790,379
0
3
0
false
0
0
Python source is only plain ASCII, meaning that the actual encoding does not matter except for litteral strings, be them unicode strings or byte strings. Identifiers can use non ascii characters (IMHO it would be a very bad practice), but their meaning is normally internal to the Python interpreter, so the way it reads them is not really important Byte strings are always left unchanged. That means that normal strings in Python 2 and byte litteral strings in Python 3 are never converted. Unicode strings are always converted: if the special string coding: charset_name exists in a comment on first or second line, the original byte string is converted as it would be with decode(charset_name) if not encoding is specified, Python 2 will assume ASCII and Python 3 will assume utf8
1
0
0
0
Say, I have a source file encoded in utf8, when python interpreter loads that source file, will it convert file content to unicode in memory and then try to evaluate source code in unicode? If I have a string with non ASCII char in it, like astring = '中文' and the file is encoded in gbk. Running that file with python 2, I found that string actually is still in raw gbk bytes. So I dboubt, python 2 interpret does not convert source code to unicode. Beacause if so, the string content will be in unicode(I heard it is actually UTF16) Is that right? And if so, how about python 3 interpreter? Does it convert source code to unicode format? Acutally, I know how to define unicode and raw string in both Python2 and 3. I'm just curious about one detail when the interpreter loads source code. Will it convert the WHOLE raw source code (encoded bytes) to unicode at very beginning and then try to interpret unicode format source code piece by piece? Or instead, it just loads raw source piece by piece, and only decodes what it think should. For example, when it hits the statement u'中文' , OK, decode to unicode. While it hits statment b'中文', OK, no need to decode. Which way the interpreter will go?
when python interpreter loads source file, will it convert file content to unicode in memory?
0
0
1
0
0
93
46,742,682
2017-10-14T08:53:00.000
1
0
0
0
0
python,mysql
0
46,745,333
0
1
0
true
0
0
The COMMIT does not actually return until the data has been... committed... so, yes, once you have committed any transaction, the work from that transaction is entirely done, as far as your application is concerned.
1
0
0
0
I have MySQL database where I'm loading big files which insert more than 190 000 rows. I'm using python script which is doing some stuff and then load data from csv file into mysql execute query and commit. My question is if I'm sending such a big file, is database ready after commit command or how to trigger when all datas are inserted in database?
MySQL commit trigger done
0
1.2
1
1
0
54
46,759,726
2017-10-15T20:35:00.000
0
0
0
0
0
python,screen-scraping
0
46,760,481
0
1
0
false
1
0
hacking a game I see. Provided you are aware that what you are doing may diminish the validity of other's playtime as well as potentially committing a crime, I shall provide a solution: You would need to get a piece of "sniffing" software which allows modifications. The modifications are likely to be the addition of "Querystring" and "JSON" parsers to read the data traffic. At this point, you can begin learning how their particular system works, slowly replacing traffic with modified versions for your nefarious purposes. "TCP Sniffing" includes creating a "RAW TCP SOCKET" in whatever language and then repeatedly "READ'ing / RECV'ing" from that socket. The socket MUST be bound TO THE SPECIFIC NETWORK INTERFACE CARD (NIC). Hint: "LOCALHOST" and "127.0.0.1" are NOT the addresses of any NIC. You would then parse the data as a HTTP req/res stream, ensuring that you can read the contents of the frame correctly. You would then be looking to either modify the contents of the POST body or the GET querystring. Either, depending on how the game designers designed their network system.
1
0
0
0
I know very little about js and I'm trying to create a program that will get information about a browser based javascript game while I play it. I can't use a webdriver as I will be playing the game at the time. When I inspect the js on google chrome and look at the console, I can see all the information that I want to work with but I don't know how I can save that to a file or access it at the time in order to parse it. Preferably I'd be able to do this with python as that's what I will use for my code that will handle the info once I have it. Any help or a point in the right direction would be appreciated, thank you :) ps, I'm on Windows if that's important
How to scrape javascript while using a webpage normally?
0
0
1
0
1
36
46,782,716
2017-10-17T04:53:00.000
2
0
1
0
1
python
0
46,782,764
0
2
0
false
0
0
I assume that tuple.__hash__() calls hash(item) for each item in the tuple and then XOR's the results together. If one of the items isn't hashable, then that will raise a TypeError that bubbles up to the original caller.
2
5
0
0
For example, the tuple (1,[0,1,2]). I understand why from a design perspective; if the tuple were still hashable, then it would be trivial to make any unhashable type hashable by wrapping it in a tuple, which breaks the correct behavior of hashability, since you can change the value of the object without changing the hash value of the tuple. But if the tuple is not hashable, then I don't understand what makes an object hashable -- I thought it simply had to have __hash__(self) implemented, which tuple does. Based on other answers I've looked at, and from testing examples, it seems that such an object is not hashable. It seems like sensible behavior would be for tuple.__hash__() to call __hash__ for its component objects, but I don't understand how that would work from an implementation perspective, e.g. I don't know how a dictionary recognizes it as an unhashable type when it is still type tuple and tuple still defines __hash__.
Why is a tuple containing an unhashable type unhashable?
0
0.197375
1
0
0
765
46,782,716
2017-10-17T04:53:00.000
5
0
1
0
1
python
0
46,782,773
0
2
0
true
0
0
tuple implements its own hash by computing and combining the hashes of the values it contains. When hashing one of those values fails, it lets the resulting exception propagate unimpeded. Being unhashable just means calling hash() on you triggers a TypeError; one way to do that is to not define a __hash__ method, but it works equally well if, in the course of your __hash__ method you raise a TypeError (or any other error really) by some other means. Basically, tuple is a hashable type (isinstance((), collections.abc.Hashable) is true, as is isinstance(([],), collections.abc.Hashable) because it's a type level check for the existence of __hash__), but if it stores unhashable types, any attempt to compute the hash will raise an exception at time of use, so it behaves like an unhashable type in that scenario.
2
5
0
0
For example, the tuple (1,[0,1,2]). I understand why from a design perspective; if the tuple were still hashable, then it would be trivial to make any unhashable type hashable by wrapping it in a tuple, which breaks the correct behavior of hashability, since you can change the value of the object without changing the hash value of the tuple. But if the tuple is not hashable, then I don't understand what makes an object hashable -- I thought it simply had to have __hash__(self) implemented, which tuple does. Based on other answers I've looked at, and from testing examples, it seems that such an object is not hashable. It seems like sensible behavior would be for tuple.__hash__() to call __hash__ for its component objects, but I don't understand how that would work from an implementation perspective, e.g. I don't know how a dictionary recognizes it as an unhashable type when it is still type tuple and tuple still defines __hash__.
Why is a tuple containing an unhashable type unhashable?
0
1.2
1
0
0
765
46,783,732
2017-10-17T06:23:00.000
1
0
0
0
0
python,web-applications
0
46,784,214
0
1
0
false
0
1
If I got your idea right then you just want to create python web application. First, you should check out python WebFrameworks and choose right one for you. Then you should check how to work with it on your existing web server. And last but not least (if I didn't forget anything) you should check some info on front-end programming. Sorry, if I gor your idea wrong or misguiding you at somwe point.
1
0
0
0
I wrote a really simple app in Python for a school project. It is for a skillsharing community that our club is trying to start up. Imagine Venmo, but without any money involved. It's essentially a record of favors done for those in the community. The users' info is stored as a dictionary within a dictionary of all users. The dictionary is pickled and the .pkl is updated automatically whenever user info is changed. I want the users to be able to access the information online, by logging in via username and password. It will be used regularly by people and security isn't really a concern since it doesn't store personal info and the users are a small group that are in our club. Anyhow, my issue is that I only know how to do backend stuff and basic tkinter GUIs (which, AFAIK can't be used for my needs). I'm a self-taught and uncommitted novice programmer, so I don't even know how to search for what I'm trying to do. I imagine what I need to do is put the program and the .pkl on the server that will host my website. (I have a domain name, but never figured out how to actually make it a website...) From there, I imagine I have to write some code that will create a login screen and allow users to login and view the attributes associated with their profile as well as send "payment" (as favors) to other users. How do I do any/all of this? I'm looking for online resources or explanations from the community that will help me get this project off the ground. Also, telling me what it is called that I am trying to do would be greatly appreciated. Thanks!
How do I make an app available online? [Python]
0
0.197375
1
0
0
60
46,783,775
2017-10-17T06:26:00.000
0
1
1
0
0
python,maya
0
46,792,329
0
1
0
true
0
0
as suggested by Andrea, I opened the commandPort of maya and connected to it using socket in python script. now i can send commands to maya using that python script as long as the maya commandPort is open.
1
1
0
0
I want to create a simple python script which will directly transfer objects from blender to Maya. I created a python script which exports the object from blender to a temp folder. now I want to import that object into Maya without actually going to Maya>file>import. I searched for the solution for a while and found out that I can create a standalone instance of Maya with mayapy.exe and work in non-GUI instance of Maya. but what I want to do is import the object into an already running instance of Maya(GUI version) as soon as the exporting script is done running.
how to give commands to the running instance of maya with mayapy.exe?
0
1.2
1
0
0
249
46,803,803
2017-10-18T06:09:00.000
0
0
0
0
0
python,excel,file,exe,explorer
0
46,803,941
0
1
0
true
0
0
Yes this is perfectly doable. I suggest you look at PyQT5 or TkInter for the user interface, pyexcel for the excel interface and pyinstaller for packaging up an executable as you asked. There are many great tutorials on all of these modules.
1
0
0
1
The program would follow the below steps: Click on executable program made through python File explorer pops up for user to choose excel file to alter Choose excel file for executable program to alter Spits out txt file OR excel spreadsheet with newly altered data to same folder location as the original spreadsheet
Python - how to get executable program to get the windows file browser to pop up for user to choose an excel file or any other document?
0
1.2
1
1
0
123
46,817,031
2017-10-18T18:37:00.000
1
0
0
0
1
python-3.x,bokeh
0
46,832,946
0
1
0
true
0
0
This is a known bug with current versions (around 0.12.10) for now the best workaround is to increase plot.min_border (or p.min_border_left, etc) to be able to accommodate whatever the longest label you expect is. Or to rotate the labels to be parallel to the axis so that they always take up the same space, e.g. p.yaxis.major_label_orientation = "vertical"
1
1
1
0
I have just started exploring bokeh and here is a small issue I am stuck with. This is in regards with live graphs. The problem is with the axis values. Initially if I start with say 10, till 90 it shows correct values but while printing 100, it only show 10 and the last zero(0) is hidden. It's not visible. That is when it switches from 2 digit number to a 3 or more digit number only the first two digits are visible. Is there any figure property I am missing or what I am not sure of.
Bokeh Plots Axis Value don't show completely
0
1.2
1
0
0
601
46,841,117
2017-10-20T01:26:00.000
0
0
0
0
0
python,pandas,periodicity
0
46,842,124
0
1
0
false
0
0
First, you need to define what output you need, then, deduce how to treat the input to get the desired output. Regarding daily data for the first 10 years, it could be a possible option to keep only one day per week. Sub-sampling does not always mean loosing information, and does not always change the final result. It depends on the nature of the collected data: speed of variations of the data, measurement error, noise. Speed of variations: Refer to Shannon to decide whether no information is lost by sampling once every week instead of every day. Given that for the 2 last year, some people had decided to sample only once every week, it seems to say that they have observed that data does not vary much every day and that a sample every week is enough information. That provides a hint to vote for a final data set that would include one sample every week for the total 12 years. Unless they reduced the sampling for cost reason, making a compromise between accuracy and cost of doing the sampling. Try to find in the literature a what speed your data is expected to vary. Measurement error: If the measurement error contains a small epsilon that is randomly positive or negative, then, taking the average of 7 days to make a "one week" data will be better because it will increase the chances to cancel this variation. Otherwise, it is enough to do a sub-sampling taking only 1 day per week and throwing other days of the week. I would try both methods, averaging, and sub-sampling, and see if the output is significantly different.
1
0
1
0
I have a data set which contains 12 years of weather data. For first 10 years, the data was recorded per day. For last two years, it is now being recorded per week. I want to use this data in Python Pandas for analysis but I am little lost on how to normalize this for use. My thoughts Convert first 10 years data also into weekly data using averages. Might work but so much data is lost in translation. Weekly data cannot be converted to per day data. Ignore daily data - that is a huge loss Ignore weekly data - I lose more recent data. Any ideas on this?
Data Periodicity - How to normalize?
0
0
1
0
0
205
46,855,255
2017-10-20T18:32:00.000
0
0
1
0
0
python,parsing,dataframe,web
0
46,855,571
0
1
0
false
0
0
Unless you need to do that in a hurry, you could just chip off letters from the beginning or the end of the string, and check if it's a known word; if it is, cut it off and repeat. With e.g. 50k words 20 letters each, at worst you'll do 1M lookups. With a lookup taking e.g. 5ms (hitting an HDD every time), it will take 5000 seconds (about 1.5 hours), shorter than you'd spend coming up with a better algorithm.
1
0
0
0
I am trying to parse some web domains (tens of thousands) to see if they contain any English words. It is easy for me to parse the domains to grab the main part of the domain with tldextract and then I tried to use enchant to see if they exist in the English dictionary. The problem is I do not know how to split the domains in to multiple words to check, i.e. latimes returns as False but times would return as True. Does anyone know a clever way to do look if there is an english word contained at all in the strings? Thanks!
How to find if english words exist in string
0
0
1
0
0
87
46,860,706
2017-10-21T06:34:00.000
0
0
1
0
0
python,memory,time,operating-system,cpu
0
46,860,767
0
1
0
true
0
0
Efficient way is using time.sleep. Second method is just sleeping (idle) the process for it's own for 1 second. It doesn't use any other resources more that itself. First method is making an another process, which takes more memory space, CPU, etc., and waiting to end (os.system's behavior). Luckily the another process was just timeout, so the result seems same.
1
0
0
0
What is the difference between os.system("timeout 1") and time.sleep(1) in Python? I know the first one will call out the command line and let it do the timeout, but not sure how the second one make the system idle. Also, which one can save more CPU power or make less memory occupied? Thanks!!
What is the difference between os.system("timeout 1") and time.sleep(1)? Python
0
1.2
1
0
0
372
46,864,499
2017-10-21T14:48:00.000
1
0
1
0
0
python-3.x,amazon-web-services,aws-lambda
0
70,804,457
0
3
0
false
0
0
Came across the same issue and found the solution. What you want is remove_permission() on the lambda client
1
3
0
0
I have a lambda function and for that lambda function my cloudwatch event is a trigger on it... at the end of the lambda function i need to delete the trigger (cloud watch event ) on that lambda function programatically using python . how can i do that ? is there any python library to do that?
Delete trigger on a AWS Lambda function in python
1
0.066568
1
0
0
2,408
46,867,272
2017-10-21T19:36:00.000
1
0
1
0
0
python
0
46,867,396
0
4
0
false
0
0
U can use the len() function by specifying the inner sublists movies = [ [list1] , [list2] ] ; print(len(movies[0])); # prints length of 1st sublist print(len(movies[1])); #prints length of second sublist
1
0
0
0
I have a list called movies with two sublists embedded it. Do the sublists have names too? I want to use len() BIF to measure all items in the list and sublists, how do I do that?
Python len function on list with embedded sublists and strings
0
0.049958
1
0
0
65
46,871,792
2017-10-22T07:59:00.000
0
1
0
0
0
python,raspberry-pi,sandbox,iot,kaa
0
48,001,019
0
1
0
false
0
0
Mashid. From what I know, to use a KAA server, you should utilize the SDK obtained when you create a new application on the KAA server. This SDK also functions as an API key that will connect the device with a KAA server (on a KAA server named with Application Token). The platforms provided for using this SDK are C, C ++, Java, Android, and Objective C. There is currently no SDK for the Python platform.
1
0
0
0
I have some sensor nodes. they are connected to a Raspberry Pi 2 and send data on it. the data on Raspberry Pi is sending the data to Thingspeak.com and it shows the data from sensor nodes. now I am developing a Kaa server and wanna see my data (from Raspberry Pi) on Kaa. is there any chance to connect the current programmed Raspberry Pi(in Python) to Kaa? Many thanks, Shid
how to connect a programmed raspberry pi to a Kaa platform?
1
0
1
0
0
495
46,878,736
2017-10-22T20:16:00.000
1
1
0
0
1
python,c++,ide
0
46,878,851
0
1
0
true
0
0
check that in dev-C++ tools > compiler options > directories > c includes and c++ includes have the path to where your Python.h is.
1
1
0
0
I'm using Dev C++. Include Python.h doesnt work, and IDE states it cant find the file or directory. I can pull up the Python.h C file, so I know I have it. How do I connect two-and-two? I imagine I have to tell my IDE where the file path is, but how would I do that?
How to get Dev C++ to find Python.h
0
1.2
1
0
0
1,269
46,898,131
2017-10-23T20:40:00.000
4
1
0
0
0
java,python
0
46,918,732
0
1
0
true
0
0
I think you missed the part where it says but no group of (num_required - 1) bunnies can. I can explain my solution further, but I will ruin the fun. (I'm the owner of that repo). Let's try it with your answer. [[0], [0, 1, 2], [0, 1, 2], [1], [2]] Your consoles are 3. Bunny 2 can open it on its own, Bunny 3 can open it also on his own -> it does NOT satisfy the rule.
1
3
0
0
I'm working my way through Google Foobar and I'm very confused about "Free the Bunny Prisoners". I'm not looking for code, but I could use some insight from anyone that's completed it. First, the problem: Free the Bunny Prisoners You need to free the bunny prisoners before Commander Lambda's space station explodes! Unfortunately, the commander was very careful with her highest-value prisoners - they're all held in separate, maximum-security cells. The cells are opened by putting keys into each console, then pressing the open button on each console simultaneously. When the open button is pressed, each key opens its corresponding lock on the cell. So, the union of the keys in all of the consoles must be all of the keys. The scheme may require multiple copies of one key given to different minions. The consoles are far enough apart that a separate minion is needed for each one. Fortunately, you have already freed some bunnies to aid you - and even better, you were able to steal the keys while you were working as Commander Lambda's assistant. The problem is, you don't know which keys to use at which consoles. The consoles are programmed to know which keys each minion had, to prevent someone from just stealing all of the keys and using them blindly. There are signs by the consoles saying how many minions had some keys for the set of consoles. You suspect that Commander Lambda has a systematic way to decide which keys to give to each minion such that they could use the consoles. You need to figure out the scheme that Commander Lambda used to distribute the keys. You know how many minions had keys, and how many consoles are by each cell. You know that Command Lambda wouldn't issue more keys than necessary (beyond what the key distribution scheme requires), and that you need as many bunnies with keys as there are consoles to open the cell. Given the number of bunnies available and the number of locks required to open a cell, write a function answer(num_buns, num_required) which returns a specification of how to distribute the keys such that any num_required bunnies can open the locks, but no group of (num_required - 1) bunnies can. Each lock is numbered starting from 0. The keys are numbered the same as the lock they open (so for a duplicate key, the number will repeat, since it opens the same lock). For a given bunny, the keys they get is represented as a sorted list of the numbers for the keys. To cover all of the bunnies, the final answer is represented by a sorted list of each individual bunny's list of keys. Find the lexicographically least such key distribution - that is, the first bunny should have keys sequentially starting from 0. num_buns will always be between 1 and 9, and num_required will always be between 0 and 9 (both inclusive). For example, if you had 3 bunnies and required only 1 of them to open the cell, you would give each bunny the same key such that any of the 3 of them would be able to open it, like so: [ [0], [0], [0], ] If you had 2 bunnies and required both of them to open the cell, they would receive different keys (otherwise they wouldn't both actually be required), and your answer would be as follows: [ [0], [1], ] Finally, if you had 3 bunnies and required 2 of them to open the cell, then any 2 of the 3 bunnies should have all of the keys necessary to open the cell, but no single bunny would be able to do it. Thus, the answer would be: [ [0, 1], [0, 2], [1, 2], ] Languages To provide a Python solution, edit solution.py To provide a Java solution, edit solution.java Test cases Inputs: (int) num_buns = 2 (int) num_required = 1 Output: (int) [[0], [0]] Inputs: (int) num_buns = 5 (int) num_required = 3 Output: (int) [[0, 1, 2, 3, 4, 5], [0, 1, 2, 6, 7, 8], [0, 3, 4, 6, 7, 9], [1, 3, 5, 6, 8, 9], [2, 4, 5, 7, 8, 9]] Inputs: (int) num_buns = 4 (int) num_required = 4 Output: (int) [[0], [1], [2], [3]] I can't figure out why answer(5, 3) = [[0, 1, 2, 3, 4, 5], [0, 1, 2, 6, 7, 8], [0, 3, 4, 6, 7, 9], [1, 3, 5, 6, 8, 9], [2, 4, 5, 7, 8, 9]]. It seems to me that [[0], [0, 1, 2], [0, 1, 2], [1], [2]] completely satisfies the requirements laid out in the description. I don't know why you'd ever have keys with a value greater than num_required-1. One possibility I thought of was that there are unwritten rules that say all of the minions/bunnies need to have the same number of keys, and you can only have num_required of each key. However, if that's the case, then [[0, 1, 2], [0, 1, 2], [0, 3, 4], [1, 3, 4], [2, 3, 4]] would be okay. Next I thought that maybe the rule about needing to be able to use all num_required keys at once extended beyond 0, 1, and 2, regardless of how many consoles there are. That is, you should be able to make [6, 7, 8] as well as [0, 1, 2]. However, that requirement would be broken because the first bunny doesn't have any of those numbers. I'm stuck. Any hints would be greatly appreciated!
Google Foobar: Free the Bunny Prisoners clarification
0
1.2
1
0
0
4,292
46,915,455
2017-10-24T16:20:00.000
-1
1
1
1
1
python,centos,rpm,yum
0
63,481,093
0
2
0
false
0
0
To install Python on CentOS: sudo yum install python2/3 (select the version as per requirement) To uninstall Python on CentOS: sudo yum remove python2/3 (select the version as per your requirement) To check version for python3(which you installed): python3 --version To check version for python2 (which you installed): python2 --version
1
3
0
0
Today I messed up the versions of Python on my CentOS machine. Even yum cannot work properly. I made the mistake that I removed the default /usr/bin/python which led to this situation. How could I get back a clear Python environment? I thought remove them totally and reinstall Python may work, but do not know how to do it. Wish somebody could help!
Remove clearly and reinstall python on CentOS
0
-0.099668
1
0
0
19,404
46,918,646
2017-10-24T19:32:00.000
0
0
1
1
1
python,serial-port,pycharm,signals,signal-processing
1
46,918,772
0
3
0
false
0
0
What part of the diagnostic message did you find unclear? Did you consider ensuring writable self-owned files by doing sudo chown -R isozyesil /Users/isozyesil/Library/Caches/pip ? Did you consider sudo pip install bluepy ?
1
2
0
0
I am trying to install bluepy 1.0.5. However, I get receiving error below. Any idea how can i solve it? (I am using Mac OS X El Capitan) 40:449: execution error: The directory '/Users/isozyesil/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/isozyesil/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. Command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/95/f900ttf95g1b7h02y2_rtk400000gn/T/pycharm-packaging669/bluepy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /var/folders/95/f900ttf95g1b7h02y2_rtk400000gn/T/pip-djih0T-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/95/f900ttf95g1b7h02y2_rtk400000gn/T/pycharm-packaging669/bluepy/ (1)
Bluepy Installation Error
0
0
1
0
0
2,751
46,927,517
2017-10-25T08:26:00.000
0
1
0
1
1
python,multithreading,server,uwsgi
1
47,568,008
0
1
0
false
1
0
it has solved. the point is that you should create separate connection for each completely separate query to avoid missing data during each query execution
1
0
0
0
I am running a uwsgi application on my linux mint. it has does work with a database and shows it on my localhost. i run it on 127.0.0.1 IP and 8080 port. after that i want to test its performance by ab(apache benchmark). when i run the app by command uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi and get test of it, it works correctly but slowly. so i want to run the app with more than one thread to speed up. so i use --threads option and command is uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi --threads 8 for example. but when i run ab to test it, after 2 or 3 request, my application stops with some errors and i don't know how to fix it. every time i run it, type of errors are different. some of errors are like these: (Traceback (most recent call last): 2014, 'Command Out of Sync') or (Traceback (most recent call last): File "./wsgi.py", line 13, in application return show_description(id) File "./wsgi.py", line 53, in show_description cursor.execute("select * from info where id = %s;" %id) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute result = self._query(query) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query conn.query(q) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) 'Packet sequence number wrong - got 1 expected 2',) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result or ('Packet sequence number wrong - got 1 expected 2',) Traceback (most recent call last): or ('Packet sequence number wrong - got 1 expected 2',) Traceback (most recent call last): File "./wsgi.py", line 13, in application return show_description(id) File "./wsgi.py", line 52, in show_description cursor.execute('UPDATE info SET views = views+1 WHERE id = %s;', id) File "/home/mohammadhossein/myFirstApp/myappenv/local/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute result = self._query(query) Please help me how to run my uwsgi application wiht more than one thread safety. any help will be welcome
uwsgi application stops with error when running it with multi thread
0
0
1
1
0
238
46,940,171
2017-10-25T19:01:00.000
2
0
1
0
0
python,json,object-literal,object-notation
0
46,940,207
0
1
0
false
0
0
JavaScript literals are not called JSON. JSON derived its name and syntax from JavaScript, but they’re not the same thing. Use “Python literals”.
1
2
0
0
I have a program that can output results either as JSON or Python data structure literals. I am wondering how to succinctly name the latter option.
Is there a name for Python literals? The way JavaScript literals are called JSON?
0
0.379949
1
0
0
58
46,940,780
2017-10-25T19:37:00.000
0
1
0
0
0
python,telegram-bot,python-telegram-bot
0
46,947,479
0
1
0
false
0
0
InlineQueryResultAudio only accepts links while InlineQueryResultCachedAudio only accepts file_id. What you can do is post the files to your own server or upload it elsewhere to use the former one, or use SendAudio to get the file_id of it and use the latter one.
1
0
0
0
Okay i can send audio with some url in inline mode. But how can i send the local audio from the directory? Telegram Bot API return me this: A request to the Telegram API was unsuccessful. The server returned HTTP 400 Bad Request. Response body: [b'{"ok":false,"error_code":400,"description":"Bad Request: CONTENT_URL_INVALID"}']
Telegram Bot API InlineQueryResultAudio
0
0
1
0
1
505
46,941,115
2017-10-25T20:00:00.000
1
0
0
0
0
python,django
0
46,941,490
0
2
1
true
1
0
Django admin is intended for administration purposes. For all intents and purposes it is a direct interface to your database. While I have seen some people building customer facing interfaces using admin, this is most definitely not the way to make a general Django web application. You should define views for your models. You can use built-in APIs to login and authenticate users. You should most likely restrict access to admin to internal users only. As for templates, the modern way of doing things is to dynamically fetch data using an API and do all the UI logic in Javascript. Django can be used very well to provide an API to a frontend. Look into Django REST Framework. The basic idea is to write serializers for your models and have view functions serve the serialized data to the front end. You could go the old school way and render your pages using templates of course. In that case your views would render templates using data provided by your models.
2
1
0
0
I am a total noob with Django, I come from the PHP world and I am used to doing things differently. I'm building an app and I want to change the way the backend looks, I want to use Bootstrap 4 and add a lot of custom stuff e.g. permission based admin views, and I was wondering what is the best practice, or how do more experienced django devs go about it? Do they override all the django.contrib.admin templates, or do they build custom templates and login/register next to it, and use the django.contrib.admin only for the superuser? What is the django way?
Overriding Django admin vs creating new templates/views
0
1.2
1
0
0
451
46,941,115
2017-10-25T20:00:00.000
0
0
0
0
0
python,django
0
46,941,275
0
2
1
false
1
0
Yes. The admin pages is actually for administering the webpage. For user login and registration you create the templates. However, if you want your backend to look different then you can tweak the template for the admin page, admin login page as well. And you can also have permission based admin views. It's okay to over ride the defaults as long as you know what you're doing. Hope that helped.
2
1
0
0
I am a total noob with Django, I come from the PHP world and I am used to doing things differently. I'm building an app and I want to change the way the backend looks, I want to use Bootstrap 4 and add a lot of custom stuff e.g. permission based admin views, and I was wondering what is the best practice, or how do more experienced django devs go about it? Do they override all the django.contrib.admin templates, or do they build custom templates and login/register next to it, and use the django.contrib.admin only for the superuser? What is the django way?
Overriding Django admin vs creating new templates/views
0
0
1
0
0
451
46,998,234
2017-10-29T08:29:00.000
1
0
0
0
0
python,pandas,regression,statsmodels
0
46,998,392
0
1
0
false
0
0
You can apply grouping and then do logistic regression on each group. Or you treat it as multilabel classifier and do "Softmax regression".
1
0
1
0
I have a dataset that includes 7 different covariates and an output variable, the 'success rate'. I'm trying to find the important factors that predict the success rate. One of the covariates in my dataset is a categorical variable that takes on 700 values (0- 700), each representing the ID of the district they're from. How should I deal with this variable while performing logistic regression? If I make 700 dummy columns, how can I make it easier to interpret the results? I'm using Python and statsmodels.
Logistic Regression- Working with categorical variable in Python?
0
0.197375
1
0
0
755
46,999,584
2017-10-29T11:24:00.000
-1
0
0
0
1
python,neural-network,artificial-intelligence,genetic-algorithm
0
46,999,668
0
2
0
false
0
0
Normally you use a seed for genetic algorithms, which should be fixed. It will always generate the same "random" childs sequentially, which makes your approach reproducible. So the genetic algorithm is kind of pseudo-random. That is state of art how to perform genetic algorithms.
1
0
1
0
So I am using a genetic algorithm to train a feedforward neural network, tasked with recognizing a function given to the genetic algorithm. I.e x = x**2 or something more complicated obviously. I realized I am using random inputs in my fitness function, which causes the fitness to be somewhat random for a member of the population, however, still in line with how close it is to the given function obviously. A colleague remarked that it is stranged that the same member of the population doesnt always get the same fitness, which I agree is a little unconventional. However, it got me thinking, is there any reason why this would be bad for the genetic algorithm? I actually think it might be quite good because it enables me to have a rather small testset, speeding up number of generations while still avoiding overfitting to any given testest. Does anyone have experience with this? (fitness function is MSE compared to given function, for a randomly generated testset of 10 iterations)
Random element in fitness function genetic algorithm
1
-0.099668
1
0
0
495
47,009,499
2017-10-30T06:51:00.000
0
0
1
0
0
python,pycharm,parent-child
0
47,011,258
0
2
0
false
0
0
just simply add __init__.py to each directory (dir_1 and dir_2). Note that you don't need to write anything to 2 of __init__.py, it's fine to just leave it blank.
1
1
0
0
Suppose we have a directory: main that includes: directory_1 file_1.py directory_2 file_2.py If the main code is inside file_1.py, how can I import file_2.py? If it helps, I am using pycharm.
how to let files in pycharm see different ones?
0
0
1
0
0
55
47,011,332
2017-10-30T09:01:00.000
0
0
0
0
0
python-3.x,xpath,lxml
0
47,045,098
0
1
0
false
1
0
I solved it myself. What I wanted to do was with using slice, I wanted to traverse to all xpath using iteration where the iterative will be in digits coming from for loop.
1
0
0
0
I want to iterate //span[@class="postNum and contains(text(),1)] This xpath over the range of 1 to 10 and store it in a variable. I want it to be done in HTML format and not XML. pseudo code: for e in range(1,11): xpathvar[e]='//span[@class="postNum and contains(text(),e)]' how to implement this so that xpathvar[1] will contain the first xpath with e=1. I cannot do this because the element in RHS is a string.
How to iterate over a XPath in HTML format using lxml and python?
0
0
1
0
1
204
47,023,092
2017-10-30T19:46:00.000
1
0
0
0
1
python,windows,github,pycharm
0
47,023,220
0
1
0
true
0
0
This may seems like an overkill way of handling the problem but I have fixed it myself by re-installing git on my machine. It seems to actually be the fix for this. Another thing you could do is try git-bash (Git for Windows app) in the future.
1
2
0
0
Fetch failed: Unable to find remote helper for 'https' When I tried to fetch on PyCharm from my GitHub repository, the above is the message I ended up getting. I was wondering how I could fix this.
Can't seem to fetch from GitHub repository in PyCharm
0
1.2
1
0
1
161
47,024,145
2017-10-30T21:05:00.000
3
0
0
0
0
python,openerp,odoo-8
0
57,271,414
0
1
0
true
1
0
You should add those fields to product.template model and then they will be automatically added to product.product by the inheritance. This way you will be able to show the fields in product.template views. I do not know what is the exact problem that you are trying to solve but when you need to add a field to a product you should think if its value is going to be different for each variant of the product (product.product are variants and product.template are the original product). If it is going to have the same value (You want to add it to product.template view so I imagine it is going to have it) then, add it to product.template model. I hope it helps you.
1
1
0
0
I'm trying to add new product.product fields to the default product.template view, problem is, that I've tried many examples, but none seems to be working. The issue is that I do have added these fields to the product.product default view, (as an inherit view) BUT, that view is only available on sales module, the vast majority of the Odoo's product views are from product.template Does anybody has an idea on how to achieve this on the xml view? Is it possible at all? Being product.product the model ?
Add product.product fields to product.template view - Odoo v8
1
1.2
1
0
0
1,258
47,030,098
2017-10-31T07:37:00.000
0
1
0
0
0
python,sms,gsm,sms-gateway
0
47,046,290
0
1
0
true
0
0
If you consider all protocols involved, including radio part, 300+ messages across a good dozen of protocols would have to be sent in order to deliver outgoing SMS to SMSC, and a great deal of waiting and synchronization will have to be involved. This (high overhead) will be your limiting factor and you would probably get 10-15 SMS per minute or so. Reducing overhead is only possible with a different connectivity methods, mostly to eliminate radio part and mobility management protocols. Usual methods are: connecting to a dedicated SMS gateway provider via whatever protocol they fancy, or acting as SMSC yourself and connecting to SS7 network directly.
1
0
0
0
I have a GSM Modem SIM 900D. I am using it with my server and Python code to send and receive messages. I want to know how many Text SMS I could send and receive through this GSM modem per minute.
GSM 900D Module Limit for Text Messages Sending & Receiving
0
1.2
1
0
0
377
47,038,101
2017-10-31T14:42:00.000
0
0
1
0
0
python,database,save
0
47,038,338
0
5
1
false
0
0
Correct me if I'm wrong, but opening, writing to, and subsequently closing a file should count as "saving" it. You can test this yourself by running your import script and comparing the last modified dates.
1
0
1
0
I am writing a program in Python which should import *.dat files, subtract a specific value from certain columns and subsequently save the file in *.dat format in a different directory. My current tactic is to load the datafiles in a numpy array, perform the calculation and then save it. I am stuck with the saving part. I do not know how to save a file in python in the *.dat format. Can anyone help me? Or is there an alternative way without needing to import the *.dat file as a numpy array? Many thanks!
Save data as a *.dat file?
1
0
1
0
0
41,290
47,038,309
2017-10-31T14:52:00.000
2
0
0
0
0
python,django,apache,amazon-ec2,ubuntu-14.04
0
47,038,557
0
1
0
true
1
0
You can set secrets in environment variables and get them in python code as password = os.getenv('ENVNAME').
1
0
0
0
Where to store payment gateway secret key when using python Django with apache server? I don't what to store in settings.py as i will checking this file in my git. Can i do it the same way amazon store AWS ec2 keys. If possible how to do it.
Where to store payment gateway secret key when using python Django with apache server hosted on aws ec2 Ubuntu
0
1.2
1
0
0
112
47,044,392
2017-10-31T20:54:00.000
0
0
0
0
1
python,python-requests,basic-authentication
0
47,045,020
0
2
0
false
0
0
with python requests you can open your session, do your job, then logout with: r = requests.get('logouturl', params={...}) the logout action is just an http Get method.
2
0
0
0
I am creating python script to configure router settings remotely but recently stumbled on problem how to logout or close session after job is done? From searching I found that Basic-Authentication doesn´t have option to logout. How to solve it in python script?
Basic-Auth session using python script
0
0
1
0
1
395
47,044,392
2017-10-31T20:54:00.000
0
0
0
0
1
python,python-requests,basic-authentication
0
47,044,849
0
2
0
true
0
0
Basic auth doesn't have a concept of a logout but your router's page should have some implementation. If not, perhaps it has a timeout and you just leave it. Since you're using the requests module it may be difficult to do an actual logout if there is no endpoint or parameter for it. I think the best one can do at that point is log in again but with invalid credentials. Studying the structure of the router's pages and the parameters that appear in the urls could give you more options. If you want to go a different route and use something like a headless web browser you could actually click a logout button if it exists. Something like Selenium can do this.
2
0
0
0
I am creating python script to configure router settings remotely but recently stumbled on problem how to logout or close session after job is done? From searching I found that Basic-Authentication doesn´t have option to logout. How to solve it in python script?
Basic-Auth session using python script
0
1.2
1
0
1
395
47,048,278
2017-11-01T04:30:00.000
0
0
1
0
0
python,css,jupyter-lab
0
62,358,081
1
5
0
false
0
0
You should change the font size in the website. It should be at Settings->Fonts->Code->Size for code editor and Settings->Fonts->Content->Size for main contents The css file should be <prefix>/share/jupyter/lab/themes/@jupyterlab/<your theme>/index.css. To change the font, find all the place in the css file which looks like setting font, and change it.
2
18
0
0
I recently updated to the most recent version of JupyterLab (0.28.12). I'm on Windows. I've tried adjusting the variables.css file located in \Lib\site-packages\jupyterlab\themes\@jupyterlab\theme-light-extension of my Miniconda/Anaconda folder. I mainly want to change the font family and size, which I've tried using the variables.css file. However, I can't see any changes. I went to the extreme point of deleting both theme folders, but still I can change themes without a problem through the Lab interface. Where are the JupyterLab theme .css files located? Or how can I find them? I've searched for css files and the themes sub folder seems to be the only location for them. I can't seem to find any in my user directory either c:\Users\User\.jupyter where the .css files were for Jupyter Notebook were located. Thanks!
jupyterlab - change styling - font, font size
0
0
1
0
0
36,582
47,048,278
2017-11-01T04:30:00.000
4
0
1
0
0
python,css,jupyter-lab
0
65,618,110
1
5
0
false
0
0
It is now possible to change the font sizes of most elements of the interface via the Settings menu .e.g: Settings->JupyterLab Theme->Increase Code Font Size etc. Note: These do not change if View->Presentation Mode is ticked. To change the font style one still needs to go to Settings->Advanced Settings Editor (as mentioned in other answers) - and one can also changes font sizes there - which will take effect even if Presentation Mode is enabled.
2
18
0
0
I recently updated to the most recent version of JupyterLab (0.28.12). I'm on Windows. I've tried adjusting the variables.css file located in \Lib\site-packages\jupyterlab\themes\@jupyterlab\theme-light-extension of my Miniconda/Anaconda folder. I mainly want to change the font family and size, which I've tried using the variables.css file. However, I can't see any changes. I went to the extreme point of deleting both theme folders, but still I can change themes without a problem through the Lab interface. Where are the JupyterLab theme .css files located? Or how can I find them? I've searched for css files and the themes sub folder seems to be the only location for them. I can't seem to find any in my user directory either c:\Users\User\.jupyter where the .css files were for Jupyter Notebook were located. Thanks!
jupyterlab - change styling - font, font size
0
0.158649
1
0
0
36,582
47,057,011
2017-11-01T14:18:00.000
1
0
1
0
0
python,django,visual-studio-code
0
47,057,170
0
2
0
false
1
0
It also happened to me, i tried creating a django project using django-admin.py startproject example, I asked around and i found out that the django-admin.py does not work on vscode for windows (am not really sure about mac), vscode sees it as a file and not as a command, cause vscode doesnt need the .py extension to execute the command.
1
0
0
0
I don't know when and how it started but now I have such a glitch: open CMD enter python command: "django-admin.py help" Visual Studio Code starts up and opens manage.py for editing. The CMD command itself does not return anything. on the other hand, if I enter: "django-admin help" (without .py) the CMD shows help and VSCODE does not react in any way. What is this magic? How to change VSCODE reaction to .py mentioning?
VSCode starts up when I use ".py" extension with CMD commands
0
0.099668
1
0
0
93
47,079,459
2017-11-02T15:50:00.000
7
1
0
1
0
python,docker
0
47,079,624
0
1
0
true
0
0
In your Dockerfile, either of these should work: Use the ENV instruction (ENV PYTHONPATH="/:$PYTHONPATH") Use a prefix during the RUN instruction (RUN export PYTHONPATH=/:$PYTHONPATH && <do something>) The former will persist the changes across layers. The latter will take effect in that layer/RUN command
1
2
0
0
Using dockerbuild file, how can I do there something like: export PYTHONPATH=/:$PYTHONPATH using RUN directive or other option
Adding path to pythonpath in dockerbuild file
0
1.2
1
0
0
2,449
47,080,385
2017-11-02T16:39:00.000
3
1
1
0
0
python,list,append,psychopy,del
0
47,080,508
0
1
1
false
0
0
I would take a slightly different approach: I would wrap the items you're inserting into the list with a thin object that has a timestamp field. Then I'd just leave it there, and when you iterate the list to find an object to pop - check the timestamp first and if it's bigger than 10 seconds, discard it. Do it iteratively until you find the next element that is younger than 10 seconds and use it for your needs. Implementing this approach should be considerably simpler than triggering events based on time and making sure they run accurately and etc.
1
1
0
0
I'm building an experiment in Psychopy in which, depending on the participants response, I append an element to a list. I'd need to remove/pop/del it after a specific amount of time has passed after it was appended (e.g. 10 seconds). I was considering creating a clock to each element added, but as I need to give a name to each clock and the number of elements created is unpredictable (dependent on the participants responses), I think I'd have to create names to each of the clocks created on the go. However, I don't know how to do that and, on my searches about this, people usually say this isn't a good idea. Would anyone see a solution to the issue: remove/pop/del after a specific time has passed after appending the element? Best, Felipe
Append an element to a list and del/pop/remove it after a a specific amount of time has passed since it was appended
0
0.53705
1
0
0
95
47,082,736
2017-11-02T18:58:00.000
0
0
1
1
0
python,macos,homebrew
0
47,084,214
0
2
0
false
0
0
I agree to using virtualenv, it allows you to manage different python versions separately for different projects and clients. This basically allows you to have each project it's own dependencies which are isolated from others.
2
0
0
0
What's the best way to manage multiple Python installations (long-term) if I've already installed Python 3 via brew? In the past Python versions were installed here, there, and everywhere, because I used different tools to install various updates. As you can imagine, this eventually became a problem. I once was in a situation where a package used in one of my projects only worked with Python 3.4, but I had recently updated to 3.6. My code no longer ran, and I had to scour the system for Python 3.4 to actually fire up the project. It was a huge PITA. I recently wiped my computer and would like to avoid some of my past mistakes. Perhaps this is naïve, but I'd like to limit version installation to brew. (Unless that's non-sensical — I'm open to other suggestions!) Furthermore, I'd like to know how to resolve my past version management woes (i.e. situations like the one above). I've heard of pyenv, but would that conflict with brew? Thanks!
Managing multiple Python versions on OSX
0
0
1
0
0
432
47,082,736
2017-11-02T18:58:00.000
2
0
1
1
0
python,macos,homebrew
0
47,083,151
0
2
0
true
0
0
Use virtualenvs to reduce package clash between independent projects. After activating the venv use pip to install packages. This way each project has an independent view of the package space. I use brew to install both Python 2.7 and 3.6. The venv utility from each of these will build a 2 or 3 venv respectively. I also have pyenv installed from brew which I use if I want a specific version that is not the latest in brew. After activating a specific version in a directory, I will then create a venv and use this to manage the package isolation. I can't really say what is best. Let's see what other folks say.
2
0
0
0
What's the best way to manage multiple Python installations (long-term) if I've already installed Python 3 via brew? In the past Python versions were installed here, there, and everywhere, because I used different tools to install various updates. As you can imagine, this eventually became a problem. I once was in a situation where a package used in one of my projects only worked with Python 3.4, but I had recently updated to 3.6. My code no longer ran, and I had to scour the system for Python 3.4 to actually fire up the project. It was a huge PITA. I recently wiped my computer and would like to avoid some of my past mistakes. Perhaps this is naïve, but I'd like to limit version installation to brew. (Unless that's non-sensical — I'm open to other suggestions!) Furthermore, I'd like to know how to resolve my past version management woes (i.e. situations like the one above). I've heard of pyenv, but would that conflict with brew? Thanks!
Managing multiple Python versions on OSX
0
1.2
1
0
0
432
47,096,120
2017-11-03T12:44:00.000
1
0
1
0
0
python,dll,source-code-protection
0
51,919,783
0
4
0
false
0
0
One other option, of course, is to expose the functionality over the web, so that the user can interact through the browser without ever having access to the actual code.
1
1
0
0
I have written a python code which takes an input data file, performs some processing on the data and writes another data file as output. I should distribute my code now but the users should not see the source code but be able to just giving the input and getting the output! I have never done this before. I would appreciate any advice on how to achieve this in the easiest way. Thanks a lot in advance
How to protect my Python code before distribution?
0
0.049958
1
0
0
11,558
47,102,090
2017-11-03T18:10:00.000
0
0
0
0
0
python,django,xml,django-rest-framework
0
47,103,539
0
1
0
false
1
0
You don't really do this in view part of Django. What you should do is take the json, find the uri, get the content of uri through urllib, requests etc, get the relevant content from the response, add a new field to the json and then pass it to your view.
1
0
0
0
I have a django view that takes in a json object and from that object I am able to get a uri. The uri contains an xml object. What I want to do is get the data from the xml object but I am not sure how to do this. I'm using django rest, which I am fairly inexperienced in using, but I do not know the uri until I search the json object in the view. I have tried parsing it in the template but ran into CORS issues amongst others. Any ideas on how this could be done in the view? My main issue is not so much parsing the xml but how to get around the CORS issue which I have no experience in dealing with
Getting data from a uri django
0
0
1
0
1
68
47,104,930
2017-11-03T21:56:00.000
0
0
0
1
0
google-cloud-platform,google-cloud-storage,google-cloud-python
1
65,637,184
0
2
0
false
1
0
The solution for me was that both google-cloud-storage and pkg_resources need to be in the same directory. It sounds like your google-cloud-storage is in venv and your pkg_resources is in the lib folder
1
7
0
0
As in the title, when running the appserver I get a DistributionNotFound exception for google-cloud-storage: File "/home/[me]/Desktop/apollo/lib/pkg_resources/init.py", line 867, in resolve raise DistributionNotFound(req, requirers) DistributionNotFound: The 'google-cloud-storage' distribution was not found and is required by the application Running pip show google-cloud-storage finds it just fine, in the site packages dir of my venv. Everything seems to be in order with python -c "import sys; print('\n'.join(sys.path))" too; the cloud SDK dir is in there too, if that matters. Not sure what to do next.
google-cloud-storage distribution not found despite being installed in venv
1
0
1
0
0
1,810
47,109,343
2017-11-04T09:36:00.000
0
0
1
0
1
python,tensorflow,anaconda,spyder,code-completion
0
47,526,695
0
2
0
false
0
0
Now that I have to use a temporary alternative, I installed anaconda version without an installed tensorflow in anaconda's envs. And I use it when I don't use tensorflow. I hope this question can be complement answered, please attent my answer.
2
0
1
0
I am data scientist in beijing and working with anaconda in win7 but after I pip installed tensorflow v1.4,code completion of my IDE spyder in anaconda not work, before anything of code completion function is work perfectly. Now even I uninstall tensorflow,code completion function of spyder still not work. Any help? my envirment: win7 anaconda3 v5.0 for win64 (py3.6) tensorflow v1.4 for win (tf_nightly-1.4.0.dev20171006-cp36-cp36m-win_amd64.whl) So two question: 1 How can i fix it so to make anaconda3 spyder code completion work again? 2 After uninstall tensorflow, anaconda3 spyder code completion still not work, what can I do?
how can i make anaconda spyder code completion work again after installing tensorflow
0
0
1
0
0
487
47,109,343
2017-11-04T09:36:00.000
0
0
1
0
1
python,tensorflow,anaconda,spyder,code-completion
0
47,525,945
0
2
0
false
0
0
I try pip rope_py3k、jedi and readline, and reset the set of tool, but all are not useful. and my Spyder code editing area also can not be automatically completed after the installation of tensorflow, I have re-installed again and found the same problem. However,when I re-installed all envs except tensorflow,it can work!! My environment is win10, anaconda3.5, python3.6.3, tensorflow1.4. Did you resolve it?And I hope you can teach me.
2
0
1
0
I am data scientist in beijing and working with anaconda in win7 but after I pip installed tensorflow v1.4,code completion of my IDE spyder in anaconda not work, before anything of code completion function is work perfectly. Now even I uninstall tensorflow,code completion function of spyder still not work. Any help? my envirment: win7 anaconda3 v5.0 for win64 (py3.6) tensorflow v1.4 for win (tf_nightly-1.4.0.dev20171006-cp36-cp36m-win_amd64.whl) So two question: 1 How can i fix it so to make anaconda3 spyder code completion work again? 2 After uninstall tensorflow, anaconda3 spyder code completion still not work, what can I do?
how can i make anaconda spyder code completion work again after installing tensorflow
0
0
1
0
0
487
47,116,912
2017-11-05T00:06:00.000
3
0
0
0
0
python,mysql,flask
1
47,117,043
0
2
0
true
0
0
flask.ext. is a deprecated pattern which was used prevalently in older extensions and tutorials. The warning is telling you to replace it with the direct import, which it guesses to be flask_mysql. However, Flask-MySQL is using an even more outdated pattern, flaskext.. There is nothing you can do about that besides convincing the maintainer to release a new version that fixes it. from flaskext.mysql import MySQL should work and avoid the warning, although preferably the package would be updated to use flask_mysql instead.
1
4
0
0
When I run from flask.ext.mysql import MySQL I get the warning Importing flask.ext.mysql is deprecated, use flask_mysql instead. So I installed flask_mysql using pip install flask_mysql,installed it successfully but then when I run from flask_mysql import MySQL I get the error No module named flask_mysql. In the first warning I also get Detected extension named flaskext.mysql, please rename it to flask_mysql. The old form is deprecated. .format(x=modname), ExtDeprecationWarning. Could you please tell me how exactly should I rename it to flask_mysql? Thanks in advance.
Python flask.ext.mysql is deprecated?
0
1.2
1
1
0
3,318
47,147,414
2017-11-06T23:13:00.000
0
0
0
0
0
python,pandas
0
47,147,531
0
6
0
false
0
0
I believe that reduce( (lambda x, y: x & (df[y[0]]<y[1])), list_of_filters ) will do it.
1
4
1
0
I am comfortable with basic filtering and querying using Pandas. For example, if I have a dataframe called df I can do df[df['field1'] < 2] or df[df['field2'] < 3]. I can also chain multiple criteria together, for example: df[(df['field1'] < 3) & (df['field2'] < 2)]. What if I don't know in advance how many criteria I will need to use? Is there a way to "chain" an arbitrary number of these operations together? I would like to pass a list of filters such as [('field1', 3), ('field2', 2), ('field3', 4)] which would result in the chaining of these 3 conditions together. Thanks!
Filter Pandas Dataframe using an arbitrary number of conditions
0
0
1
0
0
813
47,148,516
2017-11-07T01:23:00.000
0
0
1
0
0
python,mongodb,datetime,pymongo
1
47,148,634
0
1
0
false
0
0
You are experiencing the defined behavior. MongoDB has a single datetime type (datetime). There are no separate, discrete types of just date or just time. Workarounds: Plenty, but food for thought: Storing just date is straightforward: assume Z time, use a time component of 00:00:00, and ignore the time offset upon retrieval. Storing just time is trickier but doable: establish a base date like the epoch and only vary the time component, and ignore the date component upon retrieval.
1
0
0
0
I want to insert date and time into mongo ,using pymongo. However, I can insert datetime but not just date or time . here is the example code : now = datetime.datetime.now() log_date = now.date() log_time = now.time() self.logs['test'].insert({'log_date_time': now, 'log_date':log_date, 'log_time':log_time}) it show errors : bson.errors.InvalidDocument: Cannot encode object: datetime.time(9, 12, 39, 535769) in fact , i don't know how to insert just date or time in mongo shell too. i know insert datetime is new Date(), but I just want the date or time filed.
questions about using pymongo to insert date and time into mongo
0
0
1
1
0
515
47,160,587
2017-11-07T14:39:00.000
-1
0
0
0
0
python,scrapy,scrapy-spider
0
61,992,037
0
1
0
false
1
0
Yes, it's possible to do what you're trying with Scrapy's LinkExtractor library. This will help you document the URLs for all of the pages on your site. Once this is done, you can iterate through the URLs and the source (HTML) for each page using the urllib Python library. Then you can use RegEx to find whatever patterns you're looking for within the HTML for each page in order to perform your analysis.
1
5
0
0
Is it possible to use Scrapy to generate a sitemap of a website including the URL of each page and its level/depth (the number of links I need to follow from the home page to get there)? The format of the sitemap doesn't have to be XML, it's just about the information. Furthermore I'd like to save the complete HTML source of the crawled pages for further analysis instead of scraping only certain elements from it. Could somebody experienced in using Scrapy tell me whether this is a possible/reasonable scenario for Scrapy and give me some hints on how to find instructions? So far I could only find far more complex scenarios but no approach for this seemingly simple problem. Addon for experienced webcrawlers: Given it is possible, do you think Scrapy is even the right tool for this? Or would it be easier to write my own crawler with a library like requests etc.?
Sitemap creation with Scrapy
1
-0.197375
1
0
1
1,378
47,166,301
2017-11-07T19:49:00.000
2
0
0
0
0
python,database,postgresql
0
47,166,411
0
2
0
false
0
0
When your script quits your connection will close and the server will clean it up accordingly. Likewise, it's often the case in garbage collected languages like Python that when you stop using the connection and it falls out of scope it will be closed and cleaned up. It is possible to write code that never releases these resources properly, that just perpetually creates new handles, something that can be problematic if you don't have something server-side that handles killing these after some period of idle time. Postgres doesn't do this by default, though it can be configured to, but MySQL does. In short Postgres will keep a database connection open until you kill it either explicitly, such as via a close call, or implicitly, such as the handle falling out of scope and being deleted by the garbage collector.
1
0
0
0
I wonder how does Postgres sever determine to close a DB connection, if I forgot at the Python source code side. Does the Postgres server send a ping to the source code? From my understanding, this is not possible.
How does Postges Server know to keep a database connection open
1
0.197375
1
1
0
41
47,169,033
2017-11-07T23:21:00.000
0
0
1
0
0
python,string,function,parameters,arguments
0
70,921,285
0
4
0
false
0
0
A parameter is the placeholder; an argument is what holds the place. Parameters are conceptual; arguments are actual. Parameters are the function-call signatures defined at compile-time; Arguments are the values passed at run-time. Mnemonic: "Pee" for Placeholder Parameters, "Aeigh" for Actual Arguments.
1
31
0
0
So I'm still pretty new to Python and I am still confused about using a parameter vs an argument. For example, how would I write a function that accepts a string as an argument?
Parameter vs Argument Python
0
0
1
0
0
28,483
47,173,204
2017-11-08T06:45:00.000
0
0
0
0
0
python,django,web-applications,connection-pool
0
47,176,595
0
1
0
false
1
0
Your understanding of how things work is wrong, unfortunately. The way Django runs is very much dependent on the way you are deploying it, but in almost all circumstances it does not load code or initiate globals on every request. Certainly, uWSGI does not behave that way; it runs a set of long-lived workers that persist across many requests. In effect, uWSGI is already a connection pool. In other words, you are trying to solve a problem that does not exist.
1
0
0
0
The purpose is to implement a pool like database connection pool in my web application. My application is write by Django. The problem is that every time a http request come, my code will be loaded and run through. So if I write some code to initiate a pool. These code will be run per http request. And the pool will be initiated per request. So it is meaningless. So how should I write this?
How to implement a connection pool in web application like django?
0
0
1
0
0
917
47,193,190
2017-11-09T03:02:00.000
1
0
0
1
1
python,cloud,publish-subscribe
1
51,531,222
0
1
0
false
0
0
Update your google-cloud-pubsub to the latest version. It should resolve the issue
1
2
0
0
After running sudo pip install google.cloud.pubsub I am running the following python code in ubtunu google compute engine: import google.cloud.pubsub_v1 I get the following error when importing this: ImportError: No module named pubsub_v1 attribute 'SubscriberClient' Can anyone tell me how to fix this?
AttributeError: 'module' object has no attribute 'SubscriberClient'
0
0.197375
1
0
0
558
47,231,950
2017-11-10T22:15:00.000
1
0
1
0
0
python,python-3.x
0
47,231,983
0
1
0
true
0
0
asyncio only runs one coroutine at a time and only switches at points you define, so race conditions aren't really a thing. Since you're not worried about race conditions, you're not really worried about locks (although technically you could still get into a deadlock situation if you have 2 coroutines that wake each other, but you'd have to try really hard to make that happen)
1
1
0
0
I'm new to asyncio and I was wondering how you prevent race conditions from occuring. I don't see implementation for locks - is there a different way this is handled?
Race Conditions with asyncio
1
1.2
1
0
0
704
47,252,394
2017-11-12T18:32:00.000
0
0
1
0
1
python,python-3.x,windows-subsystem-for-linux
1
48,129,500
0
1
0
true
0
0
Well, since no one is answering this question, I have to close the question. What I did to overcome the issue was just to uninstall the whole WSL and reinstall it.
1
2
0
0
I am using python 3.6.2 on WSL (Windows Linux subsystem) and trying to set tensorflow environment (and installing some other libraries as well). However, i always get an error when I exit and login again: ModuleNotFoundError: No module named 'tensorflow' So I have to reinstall the libraries again and the problem will be fixed until I logout again. This problem only happens with my python3. I also tried python3 and use import tensorflow to find the library, but it also returned the same error. I think the problem may be related with system path because python cannot find the library in its original searching directory. when i enter sys.path it returns: ['', '/home/jeoker/anaconda3/lib/python36.zip', '/home/jeoker/anaconda3/lib/python3.6', '/home/jeoker/anaconda3/lib/python3.6/lib-dynload', '/home/jeoker/anaconda3/lib/python3.6/site-packages'] But when I do conda list, the result always show the files in /home/jeoker/anaconda2. I tried sudo pip3 install tensorflow, but it gived me this: Requiement already satisfied. It seems that the path where the libraries are installed is not the same as where python is looking into. Does anyone know how can I fix this problem? thanks in advance!!
WSL python3 ModuleNotFoundError: No module named xxx
0
1.2
1
0
0
1,381
47,270,367
2017-11-13T17:42:00.000
0
0
1
0
0
python,file,networking
0
47,270,457
0
2
0
false
0
0
If you can remove the file without Python, then you can remove it with Python. Otherwise, the answer is "no".
2
0
0
0
Recently, I learnt how to write/delete/read a file in Python (I am a beginner). However, something has got me thinking: is it possible to write/delete/read a file in a different user account from the same network (i.e same ip address)? If so, how? Don't worry, I'll just try it at home. ;)
How can you write/delete/read a file in a different user account with the same ip address with python?
1
0
1
0
0
35
47,270,367
2017-11-13T17:42:00.000
0
0
1
0
0
python,file,networking
0
47,270,609
0
2
0
false
0
0
This is running into more of an OS question. The answer is if you have permission to do so then yes you can but if you do not have permissions over the file then you cannot.
2
0
0
0
Recently, I learnt how to write/delete/read a file in Python (I am a beginner). However, something has got me thinking: is it possible to write/delete/read a file in a different user account from the same network (i.e same ip address)? If so, how? Don't worry, I'll just try it at home. ;)
How can you write/delete/read a file in a different user account with the same ip address with python?
1
0
1
0
0
35
47,277,332
2017-11-14T03:49:00.000
0
0
0
0
0
python,opencv,opticalflow,cv2
0
47,689,288
0
1
0
false
0
0
Yes, it's possible. cv2.calcOpticalFlowPyrLK() will be the optical flow function you need. Before you make that function call, you will have to create an image mask. I did a similar project, but in C++, though I can outline the steps for you: Create an empty matrix with same width and height of your images Using the points from your ROI, create a shape out of it (I did mine using cv2.fillPoly()) and fill the inside of the shape with white (Your image mask should only be comprised of black and white color) If you are planning on using corners as features, then call cv2.goodFeaturesToTrack() and pass in the mask you've made as one of its arguments. If you're using the Feature2D module to detect features, you can use the same mask to only extract the features in that masked area. By this step, you should now have a collection of features/points that are only within the bounds of the shape! Call the optical flow function and then process the results. I hope that helps.
1
0
1
0
I am using OpenCV's Optical Flow module. I understand the examples in the documentation but those take the entire image and then get the optical flow over the image. I only want to pass it over some parts of an image. Is it possible to do that? If yes, how do I go about it? Thanks!
cv2 running optical flow on particular rectangles
1
0
1
0
0
279
47,289,424
2017-11-14T15:24:00.000
0
0
0
1
1
python,amazon-web-services,cygwin,aws-cli
1
47,595,992
0
2
0
false
0
0
Ok, so I spent ages trying to do this as well because I wanted to get setup on the fast.ai course. Nothing seemed to work. However, I uninstalled Anaconda3 and installed Anaconda2 instead. That did the trick!
2
0
0
0
I am working on a Windows computer and have been using Git Bash up until now without a problem. However, Git Bash seems to be missing some commands that Cygwin can provide, so I switched to Cygwin. I need to use AWS CLI with Cygwin but any time I input any aws command, I get the following error: C:\users\myusername\appdata\local\programs\python\python36\python.exe: can't open file '/cygdrive/c/Users/myusername/AppData/Local/Programs/Python/Python36/Scripts/aws': [Errno 2] No such file or directory I've seen other questions about getting Cygwin working with AWS, but they seem to talk about AWS CLI being incompatible with Windows' Anaconda version of Python (which mine doesn't seem to be). Any thoughts on how to fix this? Thanks.
AWS CLI not working in Cygwin
0
0
1
0
0
942
47,289,424
2017-11-14T15:24:00.000
0
0
0
1
1
python,amazon-web-services,cygwin,aws-cli
1
47,312,502
0
2
0
false
0
0
You are mixing cygwin posix path with with a not cygwin Python. C:\users\myusername\appdata\local\programs\python\python36\python.exe Is not the cygwin python so it can't open the file as /cygdrive/c/Users/myusername/AppData/Local/Programs/Python/Python36/Scripts/aws is not a windows path that it can understand . Only Cygwin programs understand it. Two possible solutions: 1 Use a windows path 2 Use a cygwin Python
2
0
0
0
I am working on a Windows computer and have been using Git Bash up until now without a problem. However, Git Bash seems to be missing some commands that Cygwin can provide, so I switched to Cygwin. I need to use AWS CLI with Cygwin but any time I input any aws command, I get the following error: C:\users\myusername\appdata\local\programs\python\python36\python.exe: can't open file '/cygdrive/c/Users/myusername/AppData/Local/Programs/Python/Python36/Scripts/aws': [Errno 2] No such file or directory I've seen other questions about getting Cygwin working with AWS, but they seem to talk about AWS CLI being incompatible with Windows' Anaconda version of Python (which mine doesn't seem to be). Any thoughts on how to fix this? Thanks.
AWS CLI not working in Cygwin
0
0
1
0
0
942
47,290,296
2017-11-14T16:09:00.000
1
0
1
0
0
python,python-3.x,path,anaconda,conda
0
47,290,366
0
1
0
true
0
0
C:\ProgramData\Anaconda3;C:\ProgramData\Anaconda3\Scripts;C:\ProgramData\Anaconda3\Library\bin; These three settings should be set automatically in the path folder by Anaconda? If not done automatically, put them at the very beginning of the list.
1
2
0
0
I have anaconda 3 with python 3 that works perfectly in anaconda prompt but I want to make anaconda python my default python. file names 'python.exe' is located at 'C:\ProgramData\Anaconda3\python.exe' but when I go to PATH Variables there's no file named python in Anaconda3 folder. Any suggestions on how to do it?
How can I set Anaconda3 python my default on windows
0
1.2
1
0
0
4,281
47,299,415
2017-11-15T04:34:00.000
2
0
0
0
0
python,faker
0
51,228,688
0
1
0
true
0
0
I had this same issue and looked more into it. In the en_US provider there about 1000 last names and 750 first names for about 750000 unique combos. If you randomly select a first and last name, there is a chance you'll get duplicates. But in reality, that's how the real world works, there are many John Smiths and Robert Doyles out there. There are 7203 first names and 473 last names in the en profile which can kind of help. Faker chooses the combo of first name and last name meaning there are about 7203 * 473 = 3407019. But still, there is a chance you'll get duplicates. I solve this problem by adding numbers to names. I need to generate huge dataset. Keep in mind that in reality, any huge dataset of names will have duplicates. I work with large datasets (> 1 million names) and we see a ton of duplicate first and last names. If you read the faker package code, you can probably figure out how to modify it so you get all 3M distinct names.
1
6
0
0
I have used Python Faker for generating fake data. But I need to know what is the maximum number of distinct fake data (eg: fake names) can be generated using faker (eg: fake.name() ). I have generated 100,000 fake names and I got less than 76,000 distinct names. I need to know the maximum limit so that I can know how much we can scale using this package for generating data. I need to generate huge dataset. I also want to know is Php faker, perl faker are all same for different environments? Other packages for generating huge dataset will be highly appreciated.
Maximum Limit of distinct fake data using Python Faker package
1
1.2
1
0
0
1,947
47,301,667
2017-11-15T07:28:00.000
1
0
1
0
0
python,python-3.x,python-2.7
0
47,301,713
0
1
0
true
0
0
Here, int/int will return int only that is what happening here 22/7 gives 3 and you're type casting it to float(3) which is giving 3.0 but if you will perform float/int or int/float then it will result into float so you convert any of them to float as shown following. replace float(22/7) with float(22)/7
1
0
0
0
how to make output 22//7 on python 2.7 become 3.14159 ?i try use float(22/7) but it just give me 3.0. I try use Decimal but it just give me 3, use round(x, 6) only give 3.0 just like float.
different result 22/7 on python 2.7.12 and 3.6
0
1.2
1
0
0
780
47,324,511
2017-11-16T08:07:00.000
0
0
0
0
0
c#,python,cntk
0
47,371,621
0
3
0
false
0
0
Checked that CNTKLib is providing those learners in CPUOnly package. Nestrov is missing in there but present in python. There is a difference while creating the trainer object with CNTKLib learner function vs Learner class. If a learner class is used, net parameters are provided as a IList. This can be obtained using netout.parameter() ; If CNTKLib is used, parameters are provided as ParameterVector. Build ParameterVector while building the network. and provide it while creating Trainer object. ParameterVector pv = new ParameterVector () pv.Add(weightParameter) pv.Add(biasParameter) Thanks everyone for your answers.
2
0
1
0
I am using C# CNTK 2.2.0 API for training. I have installed Nuget package CNTK.CPUOnly and CNTK.GPU. I am looking for following learners in C#. 1. AdaDelta 2. Adam 3. AdaGrad 4. Neterov Looks like Python supports these learners but C# package is not showing them. I can see only SGD and SGDMomentun learners in C# there. Any thoughts, how to get and set other learners in C#. Do I need to install any additional package to get these learners? Appreciate your help.
Learners in CNTK C# API
0
0
1
0
0
447
47,324,511
2017-11-16T08:07:00.000
0
0
0
0
0
c#,python,cntk
0
47,324,718
0
3
0
false
0
0
Download the NCCL 2 app to configure in c# www.nvidia. com or google NCCL download
2
0
1
0
I am using C# CNTK 2.2.0 API for training. I have installed Nuget package CNTK.CPUOnly and CNTK.GPU. I am looking for following learners in C#. 1. AdaDelta 2. Adam 3. AdaGrad 4. Neterov Looks like Python supports these learners but C# package is not showing them. I can see only SGD and SGDMomentun learners in C# there. Any thoughts, how to get and set other learners in C#. Do I need to install any additional package to get these learners? Appreciate your help.
Learners in CNTK C# API
0
0
1
0
0
447