Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
40,732,698 | 2016-11-22T02:07:00.000 | 1 | 1 | 0 | 0 | python,sftp,google-cloud-dataflow | 40,748,038 | 1 | false | 0 | 0 | I don't believe there is an existing sink that supports SFTP, so you would need to look at writing a custom sink, possibly based on one of the existing file-based sinks such as TextIO.Write. | 1 | 1 | 0 | My pipeline has to write to SFTP as text output.
I wonder if writing a custom sink is the only option for this.
Is there any other option? (For instance, extending 'TextIO.Write'....) | Does anyone have experience writing output to SFTP with the Dataflow Python SDK? | 0.197375 | 0 | 0 | 228
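The answer above points to a custom sink. As a rough, hypothetical sketch of the simpler ParDo-based route (not the file-based sink the answer describes), each (filename, text) element could be pushed over SFTP with paramiko inside a Beam DoFn; the class name, host, credentials, and directory below are placeholders.

```python
import apache_beam as beam
import paramiko

class WriteToSftp(beam.DoFn):
    """Hypothetical DoFn: uploads each (filename, text) element over SFTP."""
    def __init__(self, host, username, password, remote_dir):
        self.host = host
        self.username = username
        self.password = password
        self.remote_dir = remote_dir

    def process(self, element):
        filename, text = element
        transport = paramiko.Transport((self.host, 22))
        transport.connect(username=self.username, password=self.password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        try:
            with sftp.open('%s/%s' % (self.remote_dir, filename), 'w') as handle:
                handle.write(text)
        finally:
            transport.close()

# usage inside a pipeline (illustrative):
#   records | beam.ParDo(WriteToSftp('sftp.example.com', 'user', 'secret', '/outgoing'))
```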
40,732,906 | 2016-11-22T02:35:00.000 | 0 | 0 | 0 | 0 | python,html,selenium,beautifulsoup | 40,733,402 | 1 | false | 1 | 0 | It depends. If the data is already loaded when the page loads, then the data is available to scrape, it's just in a different element, or being hidden. If the click event triggers loading of the data in some way, then no, you will need Selenium or another headless browser to automate this.
Beautiful Soup is only an HTML parser, so whatever data you get by requesting the page is the only data that Beautiful Soup can access. | 1 | 4 | 0 | I am currently beginning to use BeautifulSoup to scrape websites. I think I have the basics, even though I lack theoretical knowledge about webpages; I will do my best to formulate my question.
What I mean by a dynamic webpage is the following: a site whose HTML changes based on user action, in my case its collapsible tables.
I want to obtain the data inside some "div" tag, but when you load the page the data seems unavailable in the HTML code. When you click on the table it expands, and the "class" of this "div" changes from something like "something blabla collapsible" to "something blabla collapsible active", and this I can scrape with my knowledge.
Can I get this data using beautifulsoup? In case I can't, I thought of using something like selenium to click on all the tables and then download the html, which I could scrape, is there an easier way?
Thank you very much. | Is it possible to scrape a "dynamical webpage" with beautifulsoup? | 0 | 0 | 1 | 278 |
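A minimal sketch of the Selenium fallback the question mentions: click the collapsible tables, then hand the rendered HTML to Beautiful Soup. The URL and the CSS class names are hypothetical placeholders.

```python
from selenium import webdriver
from bs4 import BeautifulSoup
import time

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-tables")  # placeholder URL

# expand every collapsible table so its data appears in the DOM
for toggle in driver.find_elements_by_css_selector("div.collapsible"):
    toggle.click()
    time.sleep(0.5)  # give the page a moment to render the expanded content

# parse the now-expanded page source with Beautiful Soup
soup = BeautifulSoup(driver.page_source, "html.parser")
rows = [div.get_text(strip=True) for div in soup.select("div.collapsible.active")]
driver.quit()
```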
40,736,440 | 2016-11-22T07:51:00.000 | 1 | 0 | 0 | 0 | python,tensorflow | 41,664,553 | 3 | false | 0 | 0 | The major difference between sigmoid and softmax is that the softmax function returns a result in terms of a probability distribution, which is more in line with the ML philosophy. All outputs of softmax sum to 1, which in turn tells you how confident the network is about each answer.
Whereas sigmoid outputs are independent per class: each output is its own probability of that class being "on", and the outputs do not sum to 1, so you would have to combine or normalize them yourself if you want a single probability.
As far as performance of the network goes, softmax generally gives better accuracy than sigmoid for mutually exclusive classes, but it is also highly dependent on the other hyperparameters. | 1 | 4 | 1 | I recently came across tensorflow softmax_cross_entropy_with_logits, but I can not figure out what the difference in the implementation is compared to sigmoid_cross_entropy_with_logits. | Difference of implementation between tensorflow softmax_cross_entropy_with_logits and sigmoid_cross_entropy_with_logits | 0.066568 | 0 | 0 | 1,855
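To make the difference concrete, here is a small NumPy sketch of the two losses the question asks about: softmax cross-entropy treats the classes as one mutually exclusive distribution, while sigmoid cross-entropy scores each class independently (the second function uses the usual numerically stable form).

```python
import numpy as np

def softmax_cross_entropy(logits, onehot_labels):
    # one distribution over mutually exclusive classes (last axis)
    z = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(onehot_labels * log_softmax).sum(axis=-1)

def sigmoid_cross_entropy(logits, labels):
    # independent 0/1 targets per class (multi-label); stable form:
    # max(x, 0) - x * z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))
```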
40,736,600 | 2016-11-22T08:02:00.000 | 1 | 0 | 1 | 0 | javascript,python,node.js | 40,736,699 | 1 | true | 0 | 0 | 1) A regexp is an advanced string operation. For every character encountered while parsing, it has to be matched against the tokens of the regexp. The complexity is a non-linear function of the length of the source and the length of the regexp string.
2) Whereas with a string tokenizer (split) on a single char, the task is clearly cut out: you sequentially traverse the source string, 'cut' and tokenize a word whenever the pattern char is encountered, and move forward. The complexity is on the order of n, where n is the number of chars in the string.
3) Is actually a variant of (2), but with more chars in the splitter. So if the first char matches, there is additional work involved to match the subsequent chars, etc. The complexity therefore increases and moves towards the regexp case, but performance is still better than a regexp, since the regexp requires further interpretation of its own tokens.
Hope this helps. | 1 | 1 | 0 | I want to find out what the fastest way is in Node to parse a URL query value. For example, given hello.org/post.html?action=newthread&fid=32&fpage=1, if I want to get the fid value, I have 3 choices:
1 str.match(/[?/&]fid=(.*?)($|[&#])/)
2 req.query.fid in Express, which I found is actually calling https://github.com/ljharb/qs/blob/master/lib/parse.js, which I found uses str.split('&') behind the scenes
3 str.split('/[&/#?]/') and then use a for loop to determine which part starts with fid
I'm guessing the 1st is the slowest and the 2nd is the fastest, but I don't know if that's correct (though I can run a test); I do want to know the deeper reason, thanks. | Comparing speed of regexp and split when parsing url query | 1.2 | 0 | 1 | 48
40,737,870 | 2016-11-22T09:17:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,keras,conv-neural-network | 40,791,232 | 2 | false | 0 | 0 | @Neal: Thank you for the prompt answer! You were right, I probably need to better explain my task. My work is somehow similar to classifying video sequences, but my data is saved in a database. I want my code to follow this steps for one epoch:
For i in (number_of_sequences):
    Get N, the number of frames in sequence i (I think that's equivalent to batch_size; the N of each sequence is already saved in a list)
    Fetch N successive frames from my database and their labels: X_train, y_train
    For j in range(number_of_rotation):
        Perform (the same) data augmentation on all frames of the sequence (probably using datagen = ImageDataGenerator() and datagen.flow())
        Train the network on X, y
My first thought was using model.fit_generator(generator = ImageDataGenerator().flow()) but this way, I can not modify my batch_size, and honestly I did not understand your solution.
Sorry for the long post, but I’m still a novice in both python and NN, but I’m really a big fan of Keras ;)
Thnx! | 1 | 2 | 1 | I have please two questions concerning the ImageDataGenerator:
1) Are the same augmentations used on the whole batch, or does each image get its own random transformation?
e.g. for rotation, does the module rotate all the images in the batch by the same angle, or does each image get a random rotation angle?
2) The data in ImageDataGenerator.flow is looped over (in batches) indefinitely. Is there a way to stop this infinite loop, i.e. do the augmentation only n times? Because I need to modify the batch_size in each step (not each epoch).
Thanks | Augmentations in Keras ImageDataGenerator | 0 | 0 | 0 | 2,135
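A small sketch answering both points above, assuming X_batch, y_batch, number_of_rotation, and model already exist (they are placeholders taken from the question): each image drawn from flow() gets its own random transformation, and the "infinite" generator is simply stopped by calling next() a fixed number of times.

```python
from keras.preprocessing.image import ImageDataGenerator

# each image gets its own random parameters within rotation_range
datagen = ImageDataGenerator(rotation_range=20, horizontal_flip=True)

flow = datagen.flow(X_batch, y_batch, batch_size=len(X_batch), shuffle=False)

# pull only a fixed number of augmented batches instead of looping forever
for _ in range(number_of_rotation):
    X_aug, y_aug = next(flow)
    model.train_on_batch(X_aug, y_aug)
```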
40,740,979 | 2016-11-22T11:43:00.000 | 0 | 0 | 0 | 1 | python | 43,085,576 | 2 | false | 0 | 0 | I fixed this very same problem as follows:
I removed models/menu.py, and then the problem appears to be solved.
=> Remember to delete models/menu.py | 1 | 0 | 0 | I am working on a web2py image blog.
I am unable to understand these errors:
Internal error
Ticket issued: images/127.0.0.1.2016-11-22.16-44-39.95144250-6a2e-4648-83f9-aa55873e6ae8
Error ticket for "images"
Ticket ID
127.0.0.1.2016-11-22.16-44-39.95144250-6a2e-4648-83f9-aa55873e6ae8
<type 'exceptions.NameError'> name 'myconf' is not defined
Version
web2py™ Version 2.14.6-stable+timestamp.2016.05.10.00.21.47
Python Python 2.7.12: /usr/bin/python (prefix: /usr)
Traceback (most recent call last):
File "/home/sonu/Software/web2py /gluon/restricted.py", line 227, in restricted
exec ccode in environment
File "/home/sonu/Software/web2py /applications/images/models/menu.py", line 17, in
response.meta.author = myconf.get('app.author')
NameError: name 'myconf' is not defined | NameError: name 'myconf' is not defined | 0 | 0 | 0 | 845 |
40,742,172 | 2016-11-22T12:41:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,scikit-learn,random-forest,data-science | 40,745,346 | 2 | true | 0 | 0 | If you can, you may simply merge the two datasets and perform GridSearchCV, this ensures the generalization ability to the other dataset. If you are talking about generalization to future unknown dataset, then this might not work, because there isn't a perfect dataset from which we can train a perfect model. | 2 | 0 | 1 | I would like to find the best parameters for a RandomForest classifier (with scikit-learn) in a way that it generalises well to other datasets (which may not be iid).
I was thinking of doing the grid search using the whole training dataset while evaluating the scoring function on other datasets.
Is there an easy way to do this in python/scikit-learn? | How to do GridSearchCV with train and test being different datasets? | 1.2 | 0 | 0 | 1,071
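One concrete way to do what the question asks (search parameters on the training data while scoring on a fixed, separate dataset) is scikit-learn's PredefinedSplit, sketched below; it is not mentioned in the answers, and X_train, y_train, X_other, y_other are hypothetical arrays.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, PredefinedSplit

# stack the training data and the "other" evaluation data into one matrix
X = np.vstack([X_train, X_other])
y = np.concatenate([y_train, y_other])

# -1 = never used as a test fold, 0 = used as the single test fold
test_fold = np.concatenate([np.full(len(X_train), -1), np.zeros(len(X_other))])
cv = PredefinedSplit(test_fold)

search = GridSearchCV(RandomForestClassifier(),
                      param_grid={'n_estimators': [50, 100, 200]},
                      cv=cv)
search.fit(X, y)
print(search.best_params_)
```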
40,742,172 | 2016-11-22T12:41:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,scikit-learn,random-forest,data-science | 40,744,270 | 2 | false | 0 | 0 | I don't think you can evaluate on a different data set. The whole idea behind GridSearchCV is that it splits your training set into n folds, trains on n-1 of those folds and evaluates on the remaining one, repeating the procedure until every fold has been "the odd one out". This keeps you from having to set apart a specific validation set and you can simply use a training and a testing set. | 2 | 0 | 1 | I would like to find the best parameters for a RandomForest classifier (with scikit-learn) in a way that it generalises well to other datasets (which may not be iid).
I was thinking of doing the grid search using the whole training dataset while evaluating the scoring function on other datasets.
Is there an easy way to do this in python/scikit-learn? | How to do GridSearchCV with train and test being different datasets? | 0.197375 | 0 | 0 | 1,071
40,744,501 | 2016-11-22T14:30:00.000 | 0 | 0 | 0 | 0 | python,algorithm,math | 40,745,068 | 2 | false | 0 | 0 | I've done this before for Ramachandran plots. I will check but I seem to remember the algorithm I used was:
Assume that your Euclidean area has an origin at (0,0)
Shift all the points in a set so that they are fully within the area
Calculate the centroid
Shift the centroid back by the reverse of the original
So, say your points are split across the X axis, you find the minimum X-coordinate of the point set that is larger than the midpoint axis of the area. Then find the distance d to the edge on the Y axis. Then shift the entire cluster by d. | 1 | 1 | 1 | I have a flat Euclidean rectangular surface but when a point moves to the right boundary, it will appear at the left boundary (at the same y value). And visa versa for moving across the top and bottom boundaries. I'd like to be able to calculate the centroid of a set of points. The set of points in question are mostly clumped together.
I know I can calculate the centroid for a set of points by averaging all the x and y values. How would I do it in a wrap-around map? | Centroid of a set of positions on a toroidally wrapped (x- and y- wrapping) 2D array? | 0 | 0 | 0 | 116 |
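The answer above shifts the cluster before averaging. An alternative standard trick, not taken from the answer, is the circular mean: map each coordinate onto a circle, average the unit vectors, and map back. A sketch, assuming points is an iterable of (x, y) pairs:

```python
import numpy as np

def toroidal_centroid(points, width, height):
    """Centroid on a wrap-around (toroidal) map via the circular mean of each axis."""
    pts = np.asarray(points, dtype=float)
    centroid = []
    for coord, size in ((pts[:, 0], width), (pts[:, 1], height)):
        theta = coord / size * 2.0 * np.pi                  # position -> angle
        mean_angle = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
        centroid.append((mean_angle % (2.0 * np.pi)) / (2.0 * np.pi) * size)
    return tuple(centroid)

# points straddling the x-wrap: the mean lands near 0, not at the naive 5
print(toroidal_centroid([(1, 1), (9, 1)], width=10, height=10))
```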
40,747,679 | 2016-11-22T17:06:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,deep-learning,keras | 60,201,018 | 4 | false | 0 | 0 | Just a remark : In fact you have both predict and predict_proba in most classifiers (in Scikit for example). As already mentioned, the first one predicts the class, the second one provides probabilities for each class, classified in ascending order. | 2 | 26 | 1 | I found model.predict and model.predict_proba both give an identical 2D matrix representing probabilities at each categories for each row.
What is the difference of the two functions? | keras: what is the difference between model.predict and model.predict_proba | 0.099668 | 0 | 0 | 47,190 |
40,747,679 | 2016-11-22T17:06:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,deep-learning,keras | 68,835,517 | 4 | false | 0 | 0 | In the recent version of keras e.g. 2.6.0, predict and predict_proba is same i.e. both give probabilities. To get the class labels use predict_classes. The documentation is not updated | 2 | 26 | 1 | I found model.predict and model.predict_proba both give an identical 2D matrix representing probabilities at each categories for each row.
What is the difference of the two functions? | keras: what is the difference between model.predict and model.predict_proba | 0 | 0 | 0 | 47,190 |
40,750,049 | 2016-11-22T19:25:00.000 | 5 | 0 | 0 | 0 | python,selenium,firefox,server,phantomjs | 40,750,148 | 2 | true | 0 | 0 | Selenium uses a real web browser (typically Firefox or Chrome) to make its requests, so the website probably has no idea that you're using Selenium behind the scenes.
If the website is blocking you, it's probably because of your usage patterns (i.e. you're clogging up their web server by making 1000 requests every minute. That's rude. Don't do that!)
One exception would be if you're using Selenium in "headless" mode with the HtmlUnitDriver. The website can detect that. | 2 | 3 | 0 | So I'm trying to web crawl clothing websites to build a list of great deals/products to look out for, but I notice that some of the websites that I try to load, don't. How are websites able to block selenium webdriver http requests? Do they look at the header or something. Can you give me a step by step of how selenium webdriver sends requests and how the server receives them/ are able to block them? | Some websites block selenium webdriver, how does this work? | 1.2 | 0 | 1 | 7,264 |
40,750,049 | 2016-11-22T19:25:00.000 | 0 | 0 | 0 | 0 | python,selenium,firefox,server,phantomjs | 51,425,585 | 2 | false | 0 | 0 | It's very likely that the website is blocking you due to your AWS IP.
Not only does that tell the website that somebody is likely programmatically scraping them, but most websites also have a limited number of queries they will accept from any one IP address.
You most likely need a proxy service to pipe your requests through. | 2 | 3 | 0 | So I'm trying to web crawl clothing websites to build a list of great deals/products to look out for, but I notice that some of the websites that I try to load, don't. How are websites able to block selenium webdriver http requests? Do they look at the header or something. Can you give me a step by step of how selenium webdriver sends requests and how the server receives them/ are able to block them? | Some websites block selenium webdriver, how does this work? | 0 | 0 | 1 | 7,264 |
40,750,130 | 2016-11-22T19:30:00.000 | 2 | 0 | 0 | 0 | python,django,django-migrations | 40,750,681 | 1 | true | 1 | 0 | A custom exception will give a slightly prettier message. On the other hand, migrations are usually one-off with basically no code dependencies with each other or the rest of the project, meaning you cannot reuse exception classes to ensure that each migration stands alone. The only purpose of any exception here is to provide immediate feedback since it will not be caught by anything (except possibly error logging in production environments... Where it really shouldn't be failing).
I think the disadvantages outweigh the advantages - just raise a plain one-off Exception. | 1 | 0 | 0 | I have a datamigration I actually want to roll back if certain condition happen.
I know that migrations are automatically enclosed in a transaction, so I am safe just raising an exception and then trusting all changes to be rolled back. But which exception should I raise to abort my Django data migration?
Should I write my own exception, or am I fine with raise Exception('My message explaining the problem')?
What is best practice? | How to abort Django data migration? | 1.2 | 0 | 0 | 621 |
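A minimal sketch of the plain-Exception approach the answer endorses, inside a RunPython data migration; the app label, model name, and abort condition are hypothetical placeholders.

```python
from django.db import migrations

def forwards(apps, schema_editor):
    Thing = apps.get_model('myapp', 'Thing')  # placeholder app/model
    if Thing.objects.filter(status='invalid').exists():  # placeholder condition
        # raising any exception aborts the migration; the surrounding
        # transaction rolls back every change made so far
        raise Exception('Aborting data migration: invalid rows found')
    Thing.objects.update(status='migrated')

class Migration(migrations.Migration):
    dependencies = [('myapp', '0001_initial')]
    operations = [migrations.RunPython(forwards, migrations.RunPython.noop)]
```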
40,750,670 | 2016-11-22T20:05:00.000 | 16 | 0 | 0 | 0 | python,pandas | 40,750,775 | 1 | true | 0 | 0 | Use the parameter dtype=object for Pandas to keep the data as such at load time. | 1 | 8 | 1 | How to prevent pandas.read_csv() from inferring the data types. For example, its converting strings true and false to Bool: True and False. The columns are many for many files, therefore not feasible to do:
df['field_name'] = df['field_name'].astype(np.float64) for each of the columns in each file. I prefer pandas to just read file as it is, no type inferring. | Prevent pandas.read_csv from inferring dtypes | 1.2 | 0 | 0 | 3,832 |
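Following the answer above, the call is a one-liner; the file name here is a placeholder.

```python
import pandas as pd

# every column is loaded verbatim as Python objects (strings stay strings)
df = pd.read_csv('data.csv', dtype=object)
```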
40,751,579 | 2016-11-22T21:02:00.000 | 1 | 0 | 1 | 0 | python-3.x | 59,529,811 | 1 | false | 0 | 0 | I also had the same issue yesterday and after searching through different sites, I was able to get mine working.
First of all, you need to check if the module you installed is for use with the Python2 interpreter.
You will need to either run the script with python, or (preferably) install the module for use with Python3. Using pip3 should work.
Also be sure to have Pylint installed in the editor/IDE you are working with. | 1 | 1 | 0 | Im using pycharm and trying to use Google Text to speech. When doing from gtts import gTTS, gTTS is not recognised, what am I doing wrong? I have installed gtts in pycharm. | Unable to import gTTS 'Unresolved Reference' in Python | 0.197375 | 0 | 0 | 336 |
40,753,137 | 2016-11-22T22:53:00.000 | 0 | 0 | 1 | 0 | python-3.x | 40,753,389 | 2 | false | 0 | 0 | def ter(s):
    return s[3:-3]
name=['ACBBDBA','CCABACB','CABBCBB']
xx=[ter(s) for s in name]
z=xx
print(z)
output
['B', 'B', 'B']
I did the reverse I want to delete B in middle and keep the other parts from each element | 1 | 1 | 0 | suppose I have a list which calls name:
name=['ACCBCDB','CCABACB','CAABBCB']
I want to use python to remove middle B from each element in the list.
the output should display :
['ACCCDB','CCAACB','CAABCB'] | Remove middle B from each element in the list | 0 | 0 | 0 | 37 |
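Since the answer above extracts the middle character rather than removing it, here is a short sketch that produces exactly the output the question asks for (drop the middle character of each odd-length string):

```python
name = ['ACCBCDB', 'CCABACB', 'CAABBCB']

# remove the middle character (index len(s)//2) from each string
result = [s[:len(s) // 2] + s[len(s) // 2 + 1:] for s in name]
print(result)  # ['ACCCDB', 'CCAACB', 'CAABCB']
```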
40,759,163 | 2016-11-23T08:17:00.000 | 1 | 0 | 1 | 1 | python,batch-file | 40,760,885 | 2 | true | 0 | 0 | I'm pretty sure that if I were still doing sysadmin on Windoze systems I would be replacing all but the simplest of BAT files with something -- anything! -- else. Because that particular command language is so awfully deficient in just about everything you need. (Exception handling? Block structured code? Procedures? Parsing text strings and filenames? )
Python is highly readable and "batteries included", so it has to be worthy of consideration as a Windows scripting language. However, Powershell may offer advantages with respect to interfacing to Windows components. It was designed with that in mind. With Python you'd be relying on extra modules which may or may not do all that you need, and might not be as well-supported as Python and its standard libraries. On the other hand, if it's currently done with BAT it's probably not that sophisticated!
The main argument against change is "if it ain't broke, don't fix it". So if the BAT scripts do all you need and no changes are needed, then don't make any. Not yet. (Sooner or later a change will be needed, and that is the time to consider scrapping the BAT script and starting again). | 1 | 0 | 0 | I've inherited various tasks with moderately confusing batch files. I intend to rewrite them as Python scripts, if only so I can thoroughly see what they're doing, as can my successors.
I honestly don't see why anyone would have more than a startlingly simple (under five lines, do a, b, c, d, e in order) batch file: proper logic deserves a more modern approach, surely? But am I missing something? Is there a real advantage in still using such an approach? If so, what is it?
Clarified in response to comments: Windows .bat files that check for the existence of files in certain places, move them around, and then invoke other programs based on what was found. I guess what I'm really asking is, is it normal good practice to still create batch files for this sort of thing, or is it more usual to follow a different approach? | In what circumstances are batch files the right approach? | 1.2 | 0 | 0 | 85
40,759,163 | 2016-11-23T08:17:00.000 | 2 | 0 | 1 | 1 | python,batch-file | 40,759,669 | 2 | false | 0 | 0 | I'm not sure this is the correct place for such question but anyway..
Batch files (with very slight reservations) can be ran on every windows machine since windows NT (where the cmd.exe was introduced). This is especially valuable when you have to deal with old machines running windows xp or 2003.(still) The most portable language between the windows machines.
Batch files are very fast. Especially compared to powershell.
Easier to call. WSH by default is invoked with wscript.exe (whose output appears in a cumbersome pop-up window), and PowerShell needs policy parameters; by default, double-clicking won't run PowerShell scripts.
I don't even think you need Python, unless you are aiming at multi-platform scripting (i.e. Macs or Linux machines). Windows comes with other powerful enough languages like VBScript and JavaScript (since XP), C#, Visual Basic and JScript.NET (since Vista), and PowerShell (since Windows 7). And even when you want multi-platform scripts, .NET-based languages are well worth considering, as Microsoft already offers official support for .NET on Unix. The best choices are probably PowerShell and C#, as Visual Basic and JScript.NET are in maintenance mode; though the JScript options (JScript and JScript.NET) are based on JavaScript, which at the moment is the most popular language, so investing in it will be worth it.
By the way, all languages that come packaged with Windows by default can be elegantly wrapped in a batch file. | 2 | 0 | 0 | I've inherited various tasks with moderately confusing batch files. I intend to rewrite them as Python scripts, if only so I can thoroughly see what they're doing, as can my successors.
I honestly don't see why anyone would have more than a startlingly simple (under five lines, do a, b, c, d, e in order) batch file: proper logic deserves a more modern approach, surely? But am I missing something? Is there a real advantage in still using such an approach? If so, what is it?
Clarified in response to comments: Windows .bat files that check for the existence of files in certain places, move them around, and then invoke other programs based on what was found. I guess what I'm really asking is, is it normal good practice to still create batch files for this sort of thing, or is it more usual to follow a different approach? | In what circumstances are batch files the right approach? | 0.197375 | 0 | 0 | 85
40,759,257 | 2016-11-23T08:24:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,tflearn | 40,759,746 | 1 | true | 0 | 0 | First of all, you have to know what exactly is your training sample. I am not sure what you mean by "input", does one input means one sample? Or does one row in your input means one sample?
If one input means one sample, you are in some trouble, because almost all CNNs (and almost any other machine learning algorithm) require consistency in the shape of the data. Given that some samples have more data than others, one solution is to crop out the extra rows from those which have more data, or to just ignore those with fewer rows (so as to maximize the data you use). A more complicated way would be to run a PCA over some of the samples which have more rows (and the same number of rows), then use only the principal components for all samples, if possible.
If one row means one sample, you can just merge all the data into a big chunk and process it the usual way. You got it. | 1 | 0 | 1 | I am using tflearn for modeling CNN.
However, my data has different number of rows in each input (but same number of columns).
For example, I have 100 inputs.
The first input's dimension is 4*9 but the second and third have 1*9.
I do not sure how to feed and shape data by using input_data(). | tflearn: different number of rows of each input | 1.2 | 0 | 0 | 57 |
40,762,810 | 2016-11-23T11:08:00.000 | 0 | 0 | 0 | 0 | wxpython | 40,940,273 | 1 | false | 0 | 1 | Take a look at the FloatCanvas sample in the wxPython demo. There is also a sample for the OGL package there. Either, or both of those would likely be suitable for diagrams depending on your needs. | 1 | 0 | 0 | I want to build an app for making UML diagrams. For visibility problems, I wish I could move classes using the mouse.
So here is my problem :
Do you know how I can drag and drop a widget in a canvas using wxPython?
I found some things about OGL, but it seems strange to me...
Thx guys | wxpython self made widget and drag and drop it in a canvas | 0 | 0 | 0 | 84 |
40,763,066 | 2016-11-23T11:19:00.000 | 0 | 1 | 0 | 0 | python,report,bdd,python-behave | 72,303,866 | 6 | false | 1 | 0 | To have the possibility of generaring execution reports in an easy way, we have implemented the following wrapper on top of Behave, called BehaveX, that not only generates reports in HTML format, but also in xml and json formats. It also allows us to execute tests in parallel and provides some additional features that simplify the implementation of agile practices: https://github.com/hrcorval/behavex | 1 | 7 | 0 | For Java there are external report generation tools like extent-report,testNG. The Junit produces the xml format output for individual feature file. To get a detailed report, I don't see an option or wide approach or solution within the Behave framework.
How to produce the reports in Behave, do any other tools or framework needs to be added for the report generation in Behave? | How to generate reports in Behave-Python? | 0 | 0 | 0 | 27,370 |
40,763,877 | 2016-11-23T11:55:00.000 | 2 | 0 | 1 | 0 | python,python-2.7 | 40,764,124 | 1 | true | 0 | 0 | It's caches all the way down!
Readline returns a string which might be buffered by:
The Python runtime,
The C standard library (stdio),
The CPU cache,
Memory, including virtual memory, which in bad cases could be disk!
Disk controller cache (only on servers),
Disk drive cache.
The dichotomy you propose has different answers depending on which level you're looking at. For a disk drive, there is no such thing as a "file" or "line," so it will always read a "block." Once a disk block (a few KB) is loaded into memory, it may as well sit there until the memory is needed for something else. And the C standard library usually buffers a few KB at a time as well.
So a single readline call is likely to do most of the required processing for a few lines and only return the first one to you.
Of course, Python strings are dynamically allocated, which means the object containing the line will need to be stored in memory too, and in the case of virtual memory, some of that could also be on disk! | 1 | 0 | 0 | When I am using readline function what happens under the hood?
The function reads one line from the HDD to a buffer in the RAM.
The function reads more than one line from the HDD to a buffer in the RAM.
Thanks | readline function: under the hood, python | 1.2 | 0 | 0 | 60 |
40,764,134 | 2016-11-23T12:06:00.000 | 0 | 0 | 1 | 1 | python,pip,virtualenv,sudo | 40,764,718 | 3 | false | 0 | 0 | Try adding /usr/local/share/python in /etc/launchd.conf and ~/.bashrc. This might resolve the issue you are facing. | 1 | 0 | 0 | Whenever I try running virtualenv, it returns command not found.
Per recommendations in other posts, I have tried installing virtualenv with both $ pip install virtualenv and $ sudo pip install virtualenv. I have uninstalled and tried again multiple times.
I think the issue is that I am using OSX, and pip is installing virtualenv in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages. As I understand, it should be installed in /usr/local/bin/.
How can I install virtualenv there? | Virtualenv: pip not installing Virtualenv in the correct directory | 0 | 0 | 0 | 1,861 |
40,766,909 | 2016-11-23T14:18:00.000 | 33 | 0 | 0 | 0 | python,matplotlib | 40,767,005 | 4 | true | 0 | 0 | Just decrease the opacity of the lines so that they are see-through. You can achieve that using the alpha variable. Example:
plt.plot(x, y, alpha=0.7)
Where alpha ranging from 0-1, with 0 being invisible. | 1 | 28 | 1 | Does anybody have a suggestion on what's the best way to present overlapping lines on a plot? I have a lot of them, and I had the idea of having full lines of different colors where they don't overlap, and having dashed lines where they do overlap so that all colors are visible and overlapping colors are seen.
But still, how do I do that? | Suggestions to plot overlapping lines in matplotlib? | 1.2 | 0 | 0 | 35,477
40,768,575 | 2016-11-23T15:40:00.000 | 0 | 0 | 1 | 0 | python,algorithm,sorting | 40,768,770 | 2 | false | 0 | 0 | The built-in sort method requires that cmp imposes a total ordering. It doesn't work if the comparisons are inconsistent. If it returns that A < B one time it must always return that, and it must return that B > A if the arguments are reversed.
You can make your cmp implementation work if you introduce an arbitrary tiebreaker. If two elements don't have a defined order, make one up. You could return cmp(id(a), id(b)) for instance -- compare the objects by their arbitrary ID numbers. | 1 | 2 | 1 | I have a list of elements to sort and a comparison function cmp(x,y) which decides if x should appear before y or after y. The catch is that some elements do not have a defined order. The cmp function returns "don't care".
Example: Input: [A,B,C,D], and C > D, B > D. Output: many correct answers, e.g. [D,C,B,A] or [A,D,B,C]. All I need is one output from all possible outputs..
I was not able to use the Python's sort for this and my solution is the old-fashioned bubble-sort to start with an empty list and insert one element at a time to the right place to keep the list sorted all the time.
Is it possible to use the built-in sort/sorted function for this purpose? What would be the key? | How to sort with incomplete ordering? | 0 | 0 | 0 | 809 |
40,770,767 | 2016-11-23T17:25:00.000 | 4 | 0 | 1 | 0 | python,algorithm,computer-science | 40,770,869 | 1 | true | 0 | 0 | The cheap way to turn any recursive algorithm into an iterative algorithm is to take the recursive function, put it in a loop, and use your own stack. This eliminates the function call overhead and from saving any unneeded data on the stack. However, this is not usually the "best" approach ("best" depends on the problem and context.)
The way you've worded your problem, it sounds like the idea is to break the list into sublists, find the closest pair in each, and then take the closest pair out of those two results. To do this iteratively, I think a better way to approach this than the generic way mentioned above is to start the other way around: look at lists of size 3 (there are three pairs to look at) and work your way up from there. Note that lists of size 2 are trivial.
Lastly, if your coordinates are integers, they are in Z (a much smaller subset of R). | 1 | 1 | 1 | I am trying to create an algorithm using the divide-and-conquer approach but using an iterative algorithm (that is, no recursion).
I am confused as to how to approach the loops.
I need to break up my problems into smaller sub problems, until I hit a base case. I assume this is still true, but then I am not sure how I can (without recursion) use the smaller subproblems to solve the much bigger problem.
For example, I am trying to come up with an algorithm that will find the closest pair of points (in one-dimensional space - though I intend to generalize this on my own to higher dimensions). If I had a function closest_pair(L) where L is a list of integer co-ordinates in ℝ, how could I come up with a divide and conquer ITERATIVE algorithm that can solve this problem?
(Without loss of generality I am using Python) | Iterative Divide and Conquer algorithms | 1.2 | 0 | 0 | 2,428 |
40,771,946 | 2016-11-23T18:36:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 40,771,997 | 1 | true | 0 | 0 | I think you're looking for the built-in max () function - but check your assignment to find out if you're allowed to use it, sometimes the point of an assignment is to make you implement the feature by yourself. | 1 | 2 | 0 | I'm doing some homework for my python programming class and I just wanted to know if it was possible to set any variable c equal to the greater of a and b without using an if statement.
c = bigger(a, b)
Just a thought since swapping two variables in python is so easy (a, b = b, a) that this might also be.
All help appreciated :) | Set value to bigger variable | 1.2 | 0 | 0 | 427 |
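The built-in the answer points to is max(); a conditional expression is the other if-statement-free option. The values here are arbitrary examples.

```python
a, b = 3, 7

c = max(a, b)           # built-in: the greater of the two
c = a if a > b else b   # equivalent conditional expression, no if statement block
```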
40,776,231 | 2016-11-24T00:14:00.000 | 0 | 1 | 1 | 0 | php,python,apache,lamp,uwsgi | 40,776,697 | 1 | false | 1 | 0 | the problem of scaling is always the shared data . ie how your processes are going to communicate with each other so it's not a python (GIL) problem | 1 | 1 | 0 | On a traditional LAMP stack it's easy to stack up quite a few web sites one a single VPS and get very decent performance, with the VPS serving lots of concurrent requests, thanks to the web server using processes and threads making best use of multi cores cpu despite PHP (as python) being single threaded.
Is the management of processes and threads the same on a python web stack (uwsgi + ngnix) ? On such a properly configured python stack, is it possible to achieve same result as the LAMP stack and stack several sites on same VPS with good reliability and performance making best use of cpu resources ? Does the GIL make it any different ? | Python processes, threads as compared to PHPs for web hosting | 0 | 0 | 0 | 26 |
40,776,967 | 2016-11-24T02:01:00.000 | 1 | 0 | 0 | 0 | python,numpy,least-squares,data-science | 40,777,110 | 1 | false | 0 | 0 | No, you can't really calculate an initial guess.
You'll just have to make an educated guess, and that really depends on your data and model.
If the initial guess affects the fitting, there is likely something else going on; you're probably getting stuck in local minima. Your model may be too complex, or your data range may be so large that you run into floating point precision limits and the fitting algorithm can't detect any changes for parameter changes. The latter can often be avoided by normalizing your data (and model), or, for example, using log(-log) space instead of linear space.
Or avoid leastsq altogether, and use a different minimization method (which will likely be much slower, but may produce overall better and more consistent results), such as the Nelder-Mead amoebe method. | 1 | 0 | 1 | I have a dataset for which I need to fit the plot. I am using leastsq() for fitting. However, currently I need to give initial guess values manually which is really affecting the fitting. Is there any way to first calculate the initial guess values which I can pass in leastsq()? | How to find initial guess for leastsq function in Python? | 0.197375 | 0 | 0 | 481 |
40,781,351 | 2016-11-24T08:33:00.000 | -1 | 0 | 0 | 0 | python,scala,hadoop,apache-spark,rdd | 40,782,596 | 1 | false | 0 | 0 | There are some common operators like reduceByKey and groupByKey that can cause a shuffle. You can visit the official Spark website to learn more. | 1 | 0 | 1 | How can I determine if a new transformation in Spark causes the creation of a stage? Is there a list of them? | In Spark, what kind of transformations necessitate a shuffle (and can imply a new stage)? | -0.197375 | 0 | 0 | 56
40,781,553 | 2016-11-24T08:44:00.000 | 1 | 0 | 0 | 0 | python,scipy | 40,781,778 | 1 | true | 0 | 0 | rv_continuous only handles univariate distributions. | 1 | 1 | 1 | How to use rv_continuous for multidimensional pdf?
Is rv_continuous able to handle multidimensional pdf?
My goal is to use the rvs method to generate random values from an arbitrary 2d distribution. Do you know any other solutions? | Generate random values given 2d pdf | 1.2 | 0 | 0 | 152 |
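One simple way to get rvs-like sampling from an arbitrary 2D pdf without rv_continuous (which, as the answer notes, is univariate-only) is to evaluate the pdf on a grid and draw cells with np.random.choice. A sketch with a made-up example pdf:

```python
import numpy as np

# evaluate an arbitrary (unnormalized, non-negative) 2D pdf on a grid
x = np.linspace(-3, 3, 200)
y = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, y)
pdf = np.exp(-(X**2 + Y**2) / 2) * (1 + 0.5 * np.sin(3 * X))  # placeholder pdf
p = (pdf / pdf.sum()).ravel()

# draw grid cells with probability proportional to the pdf value
idx = np.random.choice(p.size, size=10000, p=p)
iy, ix = np.unravel_index(idx, pdf.shape)
samples = np.column_stack([x[ix], y[iy]])   # approximate draws from the 2D pdf
```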
40,782,240 | 2016-11-24T09:18:00.000 | 0 | 0 | 0 | 0 | java,android,python,sockets | 40,782,291 | 3 | false | 0 | 0 | You can send ping packets to check if the server is alive. | 1 | 2 | 0 | I have a server program in python which sends and receives data packets to and from android client. I am using TCP with sockets in android client to communicate with python program.
Everything is working fine, but when the server is shut down by an NSF power failure it disconnects from the client.
My question is how I can check for the availability of the server all the time, using Android services or a BroadcastReceiver or inside my MainActivity, when the socket is closed or disconnected.
I don't want to restart my client application again after server power failure. | How to reconnect with socket after server power failure from Android Client | 0 | 0 | 1 | 698 |
40,782,641 | 2016-11-24T09:36:00.000 | 0 | 0 | 0 | 0 | python-2.7,selenium-webdriver,selenium-chromedriver | 40,795,935 | 1 | false | 0 | 0 | Fixed. It was a compatibility error. Just needed to downloaded the latest chrome driver version and it worked. | 1 | 0 | 0 | Every time I open up Chrome driver in my python script, it says "chromedriver.exe has stopped working" and crashes my script with the error: [Errno 10054] An existing connection was forcibly closed by the remote host.
I read the other forum posts on this error, but I'm very new to this and a lot of it was jargon that I didn't understand. One said something about graceful termination, and one guy said "running the request again" solved his issue, but I have no idea how to do that. Can someone explain to me in more detail how to fix this? | [Errno 10054], selenium chromedriver crashing each time | 0 | 0 | 1 | 298 |
40,783,510 | 2016-11-24T10:11:00.000 | 0 | 0 | 1 | 1 | python,automation | 40,837,259 | 1 | false | 0 | 0 | For Command Line installations which does not involve user-interaction use a shell script or python code calling those shell commands.
For Command Line installations which involve user-interaction use expect scripts or python's pexpect which does the same thing.
Now, Wizard automation could be one using Robot Class Java or SendKeys Library in Python which generate keyboard events for you. To make it more foolproof you can track the installation Logs simultaneously or to keep track of errors, I would recommend you take a screenshot at each wizard screen which could help you debug later on.
Hope it helps! | 1 | 0 | 0 | Our company provide installers that needs to be installed on windows and linux machines.I have to automate this wherein these installers get installed by some script thus minimizing manual intervention.
I have come across python for this as it will be a generic solution for both windows and linux.AutoIt is specific to windows so ignoring it.
Are there another languages whose libraries are that strong to perform above task(handle OS dialogs).. | Which language is best for automating installing installers on windows and linux machines? | 0 | 0 | 0 | 118 |
40,788,494 | 2016-11-24T14:09:00.000 | 1 | 0 | 0 | 0 | python,algorithm,parsing,tree,nlp | 40,793,242 | 3 | false | 0 | 0 | Assigning weights is a million dollar question there. As a first step, I would identify parts of the sentence (subject-predicate-clause, etc) and the sentence structure (simple-compound-complex, etc) to find "anchor" words that would have highest weight. That should make the rest of the task easier. | 1 | 3 | 1 | A friend of mine had an idea to make a speed reading program that displays words one by one (much like currently existing speed reading programs). However, the program would filter out words that aren't completely necessary to the meaning (if you want to skim something).
I have started to implement this program, but I'm not quite sure what the algorithm to get rid of "unimportant" words should be.
My idea is to parse the sentence (I'm currently using Stanford Parser) and somehow assign weights based on how important that word is to the sentence's meaning to each word then start removing words with the with the lowest weights. I will continue to do this, check how "different" the original tree and the new tree is. I will continue to remove the word with the lowest weight until the two trees are too different (I will determine some constant via a "calibration" process that each user goes through once). Finally, I will go through each word of the shortened sentence and try to replace it with a simpler or shorter synonym for that word (again while still trying to retain value).
As well, there will be special cases for very common words like "the," "a," and "of."
For example:
"Billy said to Jane, 'Do you want to go out?'"
Would become:
"Billy told Jane 'want go out?'"
This would retain basically all of the meaning of the sentence but shortened it significantly.
Is this a good idea for an algorithm and if so how would I assign the weights, what tree comparison algorithm should I use, and is inserting the synonyms done in a good place (i.e. should it be done before I try to remove any words)? | An Algorithm to Determine How Similar Two Sentences Are | 0.066568 | 0 | 0 | 3,305 |
40,793,649 | 2016-11-24T19:47:00.000 | 4 | 0 | 0 | 0 | python,svg,svgwrite | 41,472,623 | 1 | true | 0 | 0 | svgwrite will only create svg files. svgwrite does not read svg files. If you want to read, modify and then write the svg, the svgwrite package is not the package to use but I do not know of an appropriate package for you.
It might be possible to create svg which uses a reference to another image. That is there is one svg which you then put a second svg on top of the first. I have not done this and do not know if it would actually work. | 1 | 9 | 0 | I need to import an existing svg picture and add elements like circle and square on it. My file is 'test.svg' so i tryed dwg = svgwrite.Drawing('test.svg') but it create a new svg file without anything.
I use the python lib svgwrite, do you have any idea for me?
Thank you, and sorry for my english... I do my best! | How to import an existing svg with svgwrite - Python | 1.2 | 0 | 1 | 2,288 |
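Since svgwrite cannot read an existing file (per the answer above), one standard-library alternative, not mentioned in the answer, is to edit the SVG's XML directly with ElementTree; the coordinates and output file name below are made up.

```python
import xml.etree.ElementTree as ET

SVG_NS = 'http://www.w3.org/2000/svg'
ET.register_namespace('', SVG_NS)

tree = ET.parse('test.svg')            # the existing picture
root = tree.getroot()

# append a circle and a square (rect) on top of the existing content
ET.SubElement(root, '{%s}circle' % SVG_NS,
              {'cx': '50', 'cy': '50', 'r': '20', 'fill': 'red'})
ET.SubElement(root, '{%s}rect' % SVG_NS,
              {'x': '100', 'y': '100', 'width': '40', 'height': '40', 'fill': 'blue'})

tree.write('test_with_shapes.svg')
```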
40,793,882 | 2016-11-24T20:08:00.000 | 3 | 0 | 0 | 0 | python,numpy | 40,800,749 | 4 | false | 0 | 0 | map(x.__getitem__,np.where(mask)[0])
Or if you want list comprehension
[x[i] for i in np.where(mask)[0]]
This keeps you from having to iterate over the whole list, especially if mask is sparse. | 1 | 5 | 1 | Is there a way to index a python list like x = ['a','b','c'] using a numpy boolean array? I'm currently getting the following error: TypeError: only integer arrays with one element can be converted to an index | Index Python List with Numpy Boolean Array | 0.148885 | 0 | 0 | 11,235 |
40,794,180 | 2016-11-24T20:36:00.000 | 1 | 0 | 0 | 0 | javascript,python,file,upload,bokeh | 40,795,462 | 2 | false | 1 | 0 | As far as I know there is no widget native to Bokeh that will allow a file upload.
It would be helpful if you could clarify your current setup a bit more. Are your plots running on a bokeh server or just through a Python script that generates the plots?
Generally though, if you need this to be exposed through a browser you'll probably want something like Flask running a page that lets the user upload a file to a directory which the bokeh script can then read and plot. | 1 | 12 | 1 | I have a Bokeh plotting app, and I need to allow the user to upload a CSV file and modify the plots according to the data in it.
Is it possible to do this with the available widgets of Bokeh?
Thank you very much. | Upload a CSV file and read it in Bokeh Web app | 0.099668 | 0 | 0 | 6,493 |
40,797,767 | 2016-11-25T04:31:00.000 | 0 | 0 | 0 | 0 | python,autohotkey,ctypes | 40,817,329 | 1 | false | 0 | 1 | From the AHK documentation: Send, {Rwin down}
Holds the right Windows key down until {RWin Up} is sent. Down/up can be combined with any key. | 1 | 0 | 0 | I am trying to hold the keyboard keys on windows using python. I could simulate but could not hold the keys. I tried ctypes, AHK, Sendkeys but none of them worked. Is there any way to press and hold the keyboard keys until a release call. Any hint is appreciated.
thanks in advance | Simulate Keyboard Keys on Windows not working | 0 | 0 | 0 | 119 |
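A hedged sketch of the ctypes route the question mentions: on Windows, user32's keybd_event can press a key, leave it held, and release it later; the key code and hold time here are arbitrary examples.

```python
import ctypes
import time

KEYEVENTF_KEYUP = 0x0002
VK_A = 0x41                      # virtual-key code for the 'A' key (example)

user32 = ctypes.windll.user32
user32.keybd_event(VK_A, 0, 0, 0)                # key down: the key is now held
time.sleep(2)                                     # ...for about two seconds
user32.keybd_event(VK_A, 0, KEYEVENTF_KEYUP, 0)  # key up: the "release call"
```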
40,800,757 | 2016-11-25T08:40:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,lambda | 65,574,478 | 2 | false | 1 | 0 | You can use S3 Event Notifications to trigger the lambda function.
In bucket's properties, create a new event notification for an event type of s3:ObjectCreated:Put and set the destination to a Lambda function.
Then for the lambda function, write a code either in Python or NodeJS (or whatever you like) and parse the received event and send it to Slack webhook URL. | 1 | 3 | 0 | I have a use case where i want to invoke my lambda function whenever a object has been pushed in S3 and then push this notification to slack.
I know this is vague but how can i start doing so ? How can i basically achieve this ? I need to see the structure | How to write a AWS lambda function with S3 and Slack integration | 0 | 0 | 1 | 1,589 |
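A minimal sketch of the handler the answer above describes, assuming a Python 3 Lambda runtime: it parses the S3 put event and posts a message to a Slack incoming-webhook URL (the URL is a placeholder).

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def lambda_handler(event, context):
    # the S3 notification carries the bucket and key of the uploaded object
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    payload = json.dumps({"text": "New object uploaded: s3://%s/%s" % (bucket, key)})
    req = urllib.request.Request(SLACK_WEBHOOK_URL,
                                 data=payload.encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
    return "notified"
```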
40,800,848 | 2016-11-25T08:46:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo-9 | 40,805,427 | 2 | false | 1 | 0 | employee.country_id will return you the object of res.country table with respective record but you need to extend employee.country_id.name to get the character name field from database record. | 1 | 0 | 0 | I am trying to define a salary rule in Odoo 9 Payroll.
The rule condition has to be based on the employee's country. I tried the python expression code below but it does not work.
result = (employee.country_id=="Malaysia") or False
I'm aware that the field type of employee's country (nationality) is many2one with relation of res.country. I just couldn't figure out how it works. | Odoo 9 Salary Rule based on country | 0 | 0 | 0 | 265 |
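Following the answer above (compare the name field of country_id rather than the record itself), the rule condition would be roughly:

```python
# hypothetical salary-rule condition, per the answer: compare country_id.name
result = bool(employee.country_id and employee.country_id.name == 'Malaysia')
```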
40,806,743 | 2016-11-25T14:02:00.000 | 0 | 0 | 0 | 0 | javascript,python | 40,806,824 | 4 | false | 1 | 0 | Try using Socket.IO: the backend creates an event with the data, and your frontend receives the event and downloads the data.
I resolved a similar problem this way: I call the backend only when a Socket.IO event has been created by the backend.
You must set up a Socket.IO server (with Node.js) somewhere. | 1 | 1 | 0 | Front-end part
I have an AJAX request which is trying to GET data from my back-end handle every second.
If there is any data, I get this data, add it to my HTML page (without reloading), and continue pulling data every second waiting for further changes.
Back-end part
I parse web-pages every minute with Celery.
Extract data from them and pass it to an array (that is a trigger for AJAX request that there is new data).
Question
It seems to me that there is another solution for this issue.
I don't want to ask for data from JS to back-end. I want to pass data from back-end to JS, when there are any changes. But without page reload.
How can I do this? | Push data from backend (python) to JS | 0 | 0 | 1 | 1,931 |
40,807,243 | 2016-11-25T14:32:00.000 | 0 | 1 | 1 | 0 | python,import,attributes | 40,859,783 | 1 | false | 0 | 0 | Problem solved.
The problem was my script was named "pygubu.py"(same as the module i tried to import), after renaming it, everything worked. Thanks to Bryan Oakley for that info. | 1 | 0 | 0 | I try to use the "pygubu" module. I installed it via pip.
I can import it in my script but pygubu.Builder() throws an attribute error.
If I import it via the console and start the script afterwards, there is no error and the script runs.
On the internet I found that a wrong PYTHONPATH could be the problem, but I checked it: the PYTHONPATH points to the installation folder.
Do you know what could be wrong? | Python attribute error | 0 | 0 | 0 | 98
40,812,470 | 2016-11-25T21:27:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore,google-search-api | 40,816,106 | 2 | true | 1 | 0 | There is no first class support for this, your best bet is to make the document id match the datastore key and route all put/get/search requests through a single DAO/repository tier to ensure some level of consistency.
You can use parallel Async writes to keep latency down, but there's not much you can do about search not participating in transactions. It also has no defined consistency, so assume it is eventual, and probably much slower than datastore index propagation. | 2 | 0 | 0 | When a document is stored into both the Cloud datastore and a Search index, is it possible when querying from the Search index, rather than returning the Search index documents, returning each corresponding entity from the Cloud datastore instead? In other words, I essentially want my search query to return what a datastore query would return.
More background: When I create an entity in the datastore, I pass the entity id, name, and description parameters. A search document is built so that its doc id is the same as the corresponding entity id. The goal is to create a front-end search implementation that will utilize the full-text search api to retrieve all relevant documents based on the text query. However, I want to return all details of that document, which is stored in the datastore entity.
Would the only way to do this be to create a key for each search doc_id returned from the query, and then use get_multi(keys) to retrieve all relevant datastore entities? | Integrating GAE Search API with Datatstore | 1.2 | 0 | 0 | 68 |
40,812,470 | 2016-11-25T21:27:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore,google-search-api | 40,820,702 | 2 | false | 1 | 0 | You can store any information that you need in the Search API documents, in addition to their text content.
This will allow you to retrieve all data in one call at the expense of, possibly, storing some duplicate information both in the Search API documents and in the Datastore entities. Obviously, having duplicate data is not ideal, but it may be a good option for rarely changing data (e.g. document timestamp, author ID, title, etc.) as it can offer a significant performance boost. | 2 | 0 | 0 | When a document is stored into both the Cloud datastore and a Search index, is it possible when querying from the Search index, rather than returning the Search index documents, returning each corresponding entity from the Cloud datastore instead? In other words, I essentially want my search query to return what a datastore query would return.
More background: When I create an entity in the datastore, I pass the entity id, name, and description parameters. A search document is built so that its doc id is the same as the corresponding entity id. The goal is to create a front-end search implementation that will utilize the full-text search api to retrieve all relevant documents based on the text query. However, I want to return all details of that document, which is stored in the datastore entity.
Would the only way to do this be to create a key for each search doc_id returned from the query, and then use get_multi(keys) to retrieve all relevant datastore entities? | Integrating GAE Search API with Datatstore | 0 | 0 | 0 | 68 |
40,813,248 | 2016-11-25T23:06:00.000 | 1 | 0 | 1 | 0 | ipython,visual-studio-code,jupyter | 40,857,069 | 1 | true | 0 | 0 | Looks like the issue was that I didn't have the folder containing the python file open in VS Code, and so I was running into permissions errors when VS Code tried to write Jupyter output. | 1 | 2 | 0 | I have Jupyter installed on my computer and have used it to run notebooks in my browser. Also, I have the Python plugin installed in VSCode. However, if I'm editing a Python file and I bring up the command palette and select Jupyter: Run Selection/Line, nothing happens, and I'm not shown a results window.
I'm on a Mac, if it makes a difference. | Cannot use Jupyter (IPython) in Visual Studio Code | 1.2 | 0 | 0 | 4,045 |
40,813,514 | 2016-11-25T23:41:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,neo4j,redis | 40,814,518 | 1 | false | 1 | 0 | Neo4J is a graph database, which is good for multiple-hops relation search. Say, you want to get the top N posts of A's brother's friend's sister... AFAIK, it's a standalone instance, you CANNOT partition your data on several nodes. Otherwise, the relation between two person might cross machines.
Redis is a key-value store, which is good for searching by key. Say you want to get the friend list of A, or get the top N list of A. You can have a Redis cluster to distribute your data on several machines.
Which is better? It depends on your scenario. It seems that you don't need multiple-hops relation search. So Redis might be better.
You can have a SET to save the friend list of each person. and have a LIST to save the post ids of each person. When you need show posts for user A, call SMEMBERS or SSCAN to get the friend list, and then call LRANGE for each friend to get the top N post ids. | 1 | 0 | 0 | I am building a social network where each user has 3 different profiles - Profile 1, Profile 2 and Profile 3.
This is my use case:
User A follows Users B, C and D in Profile 1. User A follows Users C, F and G in Profile 2. User C follows Users A and E in profile 3.
Another question is that any user on each of these profiles would need to see the latest or (say top N) posts of the users they are following on their respective profiles (whether it is profile 1, 2 or 3).
How can we best store the above information?
Context:
I am using the Django framework and a Postgres DB to store users' profile information. Users' posts are being stored on and retrieved from a cloud CDN.
What is the best way to go about implementing these use cases, i.e. the choice of technologies to best suit this scenario? Scalability is another important factor that comes into play here. | Should I use Redis or Neo4J for the following use case? | 0 | 1 | 0 | 169
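A small redis-py sketch of the SET/LIST layout the answer proposes (a follow set per user per profile, a post-id list per user, SMEMBERS plus LRANGE for the feed); the key names are made up.

```python
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# who A follows on Profile 1 (one set per user per profile)
r.sadd('follows:1:A', 'B', 'C', 'D')

# each user's post ids, newest first (one list per user)
r.lpush('posts:B', 'post-42')
r.lpush('posts:C', 'post-43')

# feed for A on Profile 1: latest posts of everyone A follows there
feed = []
for friend in r.smembers('follows:1:A'):
    feed.extend(r.lrange('posts:%s' % friend.decode(), 0, 9))  # top 10 each
```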
40,813,545 | 2016-11-25T23:46:00.000 | 0 | 0 | 1 | 0 | python-3.x,terminal,pycharm | 40,814,965 | 1 | false | 0 | 0 | Pycharm > Settings > Project Interpreter | 1 | 2 | 0 | In pycharm's built in terminal how can I specify which version of python I want to run? With my standard terminal I can type either 'python' or 'python3' to run 2.7 or 3.5 respectively however when I try this in the pycharm terminal I can still run 2.7 with the 'python' prompt, but the python3 prompt results in a "command not found" error. I'm really new to python, command lines and programming in general so any help would be greatly appreciated. | Python version specification in pycharm terminal | 0 | 0 | 0 | 533 |
40,815,079 | 2016-11-26T04:57:00.000 | 1 | 1 | 1 | 0 | python,python-2.7,python-3.x,version | 40,815,167 | 2 | false | 0 | 0 | Could you not look at the output from 2to3 to see if any code changes may be necessary ? | 2 | 4 | 0 | Is there any automated way to test that code is compatible with both Python 2 and 3? I've seen plenty of documentation on how to write code that is compatible with both, but nothing on automatically checking. Basically a kind of linting for compatibility between versions rather than syntax/style.
I have thought of either running tests with both interpreters or running a tool like six or 2to3 and checking that nothing is output; unfortunately, the former requires that you have 100% coverage with your tests and I would assume the latter requires that you have valid Python 2 code and would only pick up issues in compatibility with Python 3.
Is there anything out there that will accomplish this task? | Checking code for compatibility with Python 2 and 3 | 0.099668 | 0 | 0 | 4,817 |
40,815,079 | 2016-11-26T04:57:00.000 | 4 | 1 | 1 | 0 | python,python-2.7,python-3.x,version | 40,815,143 | 2 | true | 0 | 0 | There is no "fool-proof" way of doing this other than running the code on both versions and finding inconsistencies. With that said, CPython2.7 has a -3 flag which (according to the man page) says:
Warn about Python 3.x incompatibilities that 2to3 cannot trivially fix.
As for the case where you have valid python3 code and you want to backport it to python2.x -- You likely don't actually want to do this. python3.x is the future. This is likely to be a very painful problem in the general case. A lot of the reason to start using python3.x is because then you gain access to all sorts of cool new features. Trying to re-write code that is already relying on cool new features is frequently going to be very difficult. Your much better off trying to upgrade python2.x packages to work on python3.x than doing things the other way around. | 2 | 4 | 0 | Is there any automated way to test that code is compatible with both Python 2 and 3? I've seen plenty of documentation on how to write code that is compatible with both, but nothing on automatically checking. Basically a kind of linting for compatibility between versions rather than syntax/style.
I have thought of either running tests with both interpreters or running a tool like six or 2to3 and checking that nothing is output; unfortunately, the former requires that you have 100% coverage with your tests and I would assume the latter requires that you have valid Python 2 code and would only pick up issues in compatibility with Python 3.
Is there anything out there that will accomplish this task? | Checking code for compatibility with Python 2 and 3 | 1.2 | 0 | 0 | 4,817 |
40,817,287 | 2016-11-26T10:21:00.000 | 0 | 0 | 1 | 0 | python-2.7,python-3.x | 40,817,988 | 1 | false | 0 | 0 | The answers to the question @pete23 mentions are good.
If you're dealing with time values (i.e., float values less than 1.0), just multiply by 86400 to get the number of seconds. | 1 | 0 | 0 | I have one cell with 00:00:44 seconds in Excel and I want it to print as-is,
but I am getting a float number instead.
This is what I printed from the dictionary: 'Avg. Visitor Response Time': 0.0005092592592592592,
These are the time responses in my Excel sheet; every time I read them, I get only floating-point numbers:
0:00:36
0:00:36
0:00:33
0:00:32
0:00:36
0:00:39
0:00:44
0:00:40
0:00:46
0:00:42
0:00:43
0:00:43
0:00:41
0:00:42
Can you please help me? | Reading seconds from excel sheet using python | 0 | 0 | 0 | 23
40,819,939 | 2016-11-26T15:31:00.000 | 2 | 0 | 0 | 0 | python,scikit-learn,keras,cross-validation | 44,195,108 | 2 | false | 0 | 0 | Nope... that seems to be the solution (as far as I know). | 1 | 4 | 1 | I am currently trying to train a regression network using Keras. To ensure proper training, I want to train using cross-validation.
The problem is that it seems Keras doesn't have any functions supporting cross-validation, or does it?
The only solution I seem to have found is to use scikit-learn's train_test_split and run model.fit for each k-fold manually. Isn't there already an integrated solution for this, rather than doing it manually? | Keras and cross validation | 0.197375 | 0 | 0 | 4,491
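A minimal sketch of the manual k-fold loop the question describes, pairing scikit-learn's KFold with a Keras model; the data and the tiny network are placeholders for illustration only:

import numpy as np
from sklearn.model_selection import KFold
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(100, 10)   # placeholder features
y = np.random.rand(100)       # placeholder regression targets

def build_model():
    # Rebuild the network from scratch for every fold so no weights leak between folds
    model = Sequential()
    model.add(Dense(32, activation='relu', input_dim=X.shape[1]))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model

scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X):
    model = build_model()
    model.fit(X[train_idx], y[train_idx], epochs=10, batch_size=16, verbose=0)
    scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0))
print("mean CV loss:", np.mean(scores))

(Keras also ships a scikit-learn wrapper, keras.wrappers.scikit_learn.KerasRegressor, which can be handed to sklearn's cross_val_score if you prefer not to write the loop yourself.)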
40,821,099 | 2016-11-26T17:33:00.000 | 0 | 0 | 0 | 0 | python,arrays,numpy | 40,821,227 | 2 | false | 0 | 0 | Assuming your array a is 1D and its length is a multiple of 500, a simple np.sum(a.reshape((-1, 500)) ** 2, axis=1) would suffice. If you want a more complicated operation, please edit your question accordingly. | 1 | 1 | 1 | I want to take a part of the values (say 500 values) of an array and perform some operation on it, such as taking the sum of squares of those 500 values, and then proceed with the next 500 values of the same array.
How should I implement this? Would a blackman window be useful in this case or is another approach more suitable? | How to use Blackman Window in numpy to take a part of values from an array? | 0 | 0 | 0 | 410 |
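A tiny, self-contained illustration of the reshape trick from the answer above, using a chunk size of 4 instead of 500 so the output is easy to inspect:

import numpy as np

a = np.arange(12)                                  # length must be a multiple of the chunk size
chunk = 4
sums_of_squares = np.sum(a.reshape(-1, chunk) ** 2, axis=1)
print(sums_of_squares)                             # one value per block of `chunk` elements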
40,822,073 | 2016-11-26T19:10:00.000 | 0 | 0 | 0 | 0 | python-3.x,windows-server-2012-r2,win32com | 40,835,895 | 1 | false | 0 | 0 | You likely installed incompatible versions like Python 32bit but win32com 64bit. | 1 | 0 | 0 | So, I am trying to import win32com.client and when I am running the script in Windows Server 2012 using python 3.5 I get the next error :
import win32api, sys, os
ImportError: DLL load failed: The specified module could not be found.
I've tried the next things:
-Copied the pywintypes35.dll and pythoncom35.dll to Python35\Lib\site-packages\win32 and win32com
-Run the Python35\Scripts\pywin32_postinstall.py
-Copied the file from step 1 into an virtualenv
None of this seems to work. Is it a problem with Python 3.5 on Windows Server 2012? | python win32com importing issue windows server 2012 r2 | 0 | 0 | 1 | 589
40,822,563 | 2016-11-26T20:01:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn | 41,367,703 | 2 | false | 0 | 0 | This is the default behavior since fit_intercept defaults to True | 1 | 3 | 1 | I have been told that when dealing with multivariate logistic regression, you want to add a column of 1's in the X matrix for the model intercept. When using the model from sklearn, does the module automatically add the column of 1? | Does Sklearn.LogisticRegression add a column of 1? | 0 | 0 | 0 | 2,238 |
40,822,654 | 2016-11-26T20:11:00.000 | 0 | 0 | 1 | 0 | java,python,arrays,algorithm | 40,825,538 | 2 | false | 0 | 0 | You can try using rectilinear transformation of coordination compression. Visualize it as a 3d space and then combine three points to get to the target color.
Hope that helps! | 1 | 2 | 0 | Given a list of colors(RGB values) L and a color C, determine if we can mix 2 or more colors from the list L to obtain C. The colors from the list can be mixed in any proportion. | Determine if we can obtain a target color by mixing colors from a list | 0 | 0 | 0 | 112 |
40,822,677 | 2016-11-26T20:13:00.000 | 0 | 0 | 1 | 0 | python | 40,822,856 | 1 | false | 0 | 0 | Since you give no attempted code, I'll just give you an idea of the algorithm.
Go through the list, skipping index 0 where the pivot value is and starting at index 1, and examine the list entries. If the entry is less than or equal to the pivot value, "swap list entries to move that value to the lower end of the table." If you do the coding right, this will not be needed and you can just leave the value in place. If the value is greater than the pivot value, swap list entries to move the examined value to the upper end of the table. You stop when you have examined all the list entries. If you really need the pivot value between the two sublists, make one last swap to put the pivot value in its proper place.
You need two variables to keep track of the limits of the examined lower part of the list and the examined upper part of the list. The unexamined values will be between these two sublists. These variables are indices into the list--much of your code will deal with using and updating these indices as you progress through the table's values.
Now think about the exact definition of those two key index variables, their initial values, how they change for each examined value, and just when you will stop the loop that examines the list's values.
Note that the algorithm I suggest is not the best way to do the partition. There is another, more complex algorithm that reduces the number of swaps. But you should learn to crawl before you walk, and my suggestion does get the job done with the same order of execution time. | 1 | 0 | 0 | I need to organize a list. The item in the zero index will be the pivot, so that every item in the list smaller will be put left to it in the list, and everything greater will be put right to it in the list.
Now, I can't use "sort" or any built-in function in Python. Could anyone give me a clue? | Organizing the list with a pivot | 0 | 0 | 0 | 59
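A minimal sketch of the partition step the answer describes, using the element at index 0 as the pivot and swapping larger values toward the upper end of the list; this is just one possible realization of that idea:

def partition(lst):
    # Rearrange lst in place around the pivot lst[0]; return the pivot's final index.
    pivot = lst[0]
    upper = len(lst)          # start of the examined "greater than pivot" region
    i = 1                     # next unexamined position
    while i < upper:
        if lst[i] > pivot:
            upper -= 1
            lst[i], lst[upper] = lst[upper], lst[i]   # move the big value to the upper end
        else:
            i += 1                                    # small value stays in the lower part
    lst[0], lst[i - 1] = lst[i - 1], lst[0]           # put the pivot between the two parts
    return i - 1

data = [5, 9, 1, 7, 3, 8, 2]
print(partition(data), data)   # 3 [3, 2, 1, 5, 8, 7, 9]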
40,823,017 | 2016-11-26T20:50:00.000 | 1 | 1 | 1 | 0 | multithreading,caching,concurrency,python-multithreading | 41,227,442 | 1 | true | 0 | 0 | You can solve this issue with locking... create a Cleaner thread that sometimes locks the cache (take all the locks that you have on it) and clears it, then release the locks...
(if you have many locks for different part of the cache, you can also lock & clear the cache piece by piece...) | 1 | 0 | 0 | I have a cache object designed to hold variables shared across modules.
Concurrent threads are accessing & writing to that cache object using a cache handler that uses variable locking to keep the cache integrity.
The cache holds all variables under a unique session id. When a new session id is created, it has to reference the old one, copying some of the variables into a new session.
Some session ids will be running concurrently too.
I need to clear out the cache as soon as all concurrent threads are done referencing it, and all new sessions have copied the variables from it into their session.
The problem...
I don't know when it's safe to clear the cache.
I have hundreds of threads making API calls of variable time. New sessions will spawn new threads. I can't just look at active threads to determine when to clear the cache.
My cache will grow to infinite size & eventually crash the program.
I believe this must be a common problem, and that those far smarter than me have thought it through.
Any ideas how to best solve this? | Cache growing infinitely large because of concurrent threads | 1.2 | 0 | 0 | 56 |
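A minimal sketch of the lock-and-clear idea from the answer above; the session bookkeeping (live_sessions) is hypothetical and only illustrates taking the cache lock before deleting entries that nothing references any more:

import threading
import time

cache_lock = threading.Lock()
cache = {}             # session_id -> dict of shared variables
live_sessions = set()  # hypothetical registry of sessions still in use

def cleaner(interval=60):
    # Periodically remove cache entries whose session is no longer live.
    while True:
        time.sleep(interval)
        with cache_lock:                      # readers/writers must hold the same lock
            for session_id in list(cache):
                if session_id not in live_sessions:
                    del cache[session_id]

threading.Thread(target=cleaner, daemon=True).start()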
40,823,341 | 2016-11-26T21:27:00.000 | 3 | 0 | 0 | 0 | python,neural-network,tensorflow,recurrent-neural-network,lstm | 40,829,189 | 1 | true | 0 | 0 | I don't think you would want to do that since each example in the batch should be independent. If they are not you should just have a batch size of 1 and a sequence of length 100 * batch_size. Often you might want to save the state in between batches. In this case you would need to save the RNN state to a variable or as I like to do allow the user to feed it in with a placeholder. | 1 | 1 | 1 | I am trying to implement RNN in Tensorflow for text prediction. I am using BasicLSTMCell for this purpose, with sequence lengths of 100.
If I understand correctly, the output activation h_t and the activation c_t of the LSTM are reset each time we enter a new sequence (that is, they are updated 100 times along the sequence, but once we move to the next sequence in the batch, they are reset to 0).
Is there a way to prevent this from happening using Tensorflow? That is, to continue using the updated c_t and h_t along all the sequences in the batch? (and then to reset them when moving to the next batch). | resetting c_t and h_t in LSTM every couple of sequences in RNN | 1.2 | 0 | 0 | 228 |
40,823,516 | 2016-11-26T21:49:00.000 | 13 | 0 | 0 | 0 | python,scrapy,web-crawler,robots.txt,scrapy-shell | 40,823,612 | 2 | true | 1 | 0 | In the settings.py file of your scrapy project, look for ROBOTSTXT_OBEY and set it to False. | 1 | 10 | 0 | I use Scrapy shell without problems with several websites, but I find problems when the robots (robots.txt) does not allow access to a site.
How can I disable robots detection by Scrapy (ignored the existence)?
Thank you in advance.
I'm not talking about the project created by Scrapy, but Scrapy shell command: scrapy shell 'www.example.com' | How to disable robots.txt when you launch scrapy shell? | 1.2 | 0 | 0 | 10,247 |
40,823,881 | 2016-11-26T22:40:00.000 | 1 | 0 | 0 | 0 | python,django,celery | 40,824,208 | 1 | true | 1 | 0 | With Celery, it is better to initialize the API in the task function. | 1 | 0 | 0 | I have a Twitter API used in my Django project.
Right now I keep initialization in my settings.py and in tasks.py I just import API from django.conf.settings. I am not sure if it's a good practice as I've never seen it. Do I need to create API instance somewhere in celery.py or even in the task function? | What is the right place to initialize third-party API in Django project for later Celery use | 1.2 | 0 | 0 | 117 |
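A minimal sketch of "initialize it in the task function": the client is built inside the Celery task, so every worker process creates its own connection instead of importing one from settings.py. tweepy and the TWITTER_* setting names are used purely for illustration:

from celery import shared_task
from django.conf import settings
import tweepy

@shared_task
def post_tweet(text):
    # Build the API client inside the task body, not at module import time.
    auth = tweepy.OAuthHandler(settings.TWITTER_API_KEY, settings.TWITTER_API_SECRET)
    auth.set_access_token(settings.TWITTER_ACCESS_TOKEN, settings.TWITTER_ACCESS_SECRET)
    api = tweepy.API(auth)
    api.update_status(status=text)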
40,824,041 | 2016-11-26T23:02:00.000 | 1 | 0 | 1 | 0 | python,pydev,eclipse-mars | 40,825,598 | 3 | false | 0 | 0 | First it is possible. Second, off top of my head it might be you need to change in preferences where the debug perspective points to? Look through pydev preferences too. Sorry not to be more helpful. Away from computer. | 2 | 6 | 0 | As my problem is quite simple I'll try to make this question simple aswell. The problem I'm having concerns the PyDev interactive console. I can bring up the console just fine without problems, and even use it as an interactive shell, just as I would be able to with IDLE. However, when I try to run my code that I've written in my project module, it won't run to the interactive console, but to the Eclipse default console. The default console displays the program just fine, but since it's not an interactive shell, I can't do anything afterwards.
With that being said, my question is: How do I get my code to run to the PyDev interactive console, and not the Eclipse default one? Thanks in advance! | PyDev: Running code to interactive console | 0.066568 | 0 | 0 | 1,226 |
40,824,041 | 2016-11-26T23:02:00.000 | 0 | 0 | 1 | 0 | python,pydev,eclipse-mars | 41,070,542 | 3 | false | 0 | 0 | Run the program from the code window. Try hitting F9 whike cyrsor in your code window. Results and bugs should show up in console. | 2 | 6 | 0 | As my problem is quite simple I'll try to make this question simple aswell. The problem I'm having concerns the PyDev interactive console. I can bring up the console just fine without problems, and even use it as an interactive shell, just as I would be able to with IDLE. However, when I try to run my code that I've written in my project module, it won't run to the interactive console, but to the Eclipse default console. The default console displays the program just fine, but since it's not an interactive shell, I can't do anything afterwards.
With that being said, my question is: How do I get my code to run to the PyDev interactive console, and not the Eclipse default one? Thanks in advance! | PyDev: Running code to interactive console | 0 | 0 | 0 | 1,226 |
40,827,922 | 2016-11-27T10:31:00.000 | 0 | 0 | 1 | 0 | python-3.x,pillow | 44,995,301 | 5 | false | 0 | 0 | Use pip install Pillow (run it in the Windows command prompt, not inside the IDLE shell). It worked for me. | 1 | 2 | 0 | As the title suggests, I have trouble installing Pillow on Python 3.5.2. I believe I already have pip and easy_install preinstalled. I have viewed some tutorials and attempted to type '>$pip install Pillow' into the IDLE shell but it came back with a syntax error. I am new to programming and have no prior IT experience so please be specific and put it in simple terms. My OS is Windows 10. Thanks. | How to install Pillow on Python 3.5? | 0 | 0 | 0 | 10,110
40,829,181 | 2016-11-27T12:54:00.000 | 0 | 0 | 1 | 1 | python,cmd,interpreter | 40,829,247 | 3 | true | 0 | 0 | The simplest way would be to just do the following in cmd:
C:\path\to\file\test.py
Windows recognizes the file extension and runs it with Python.
Or you can change the directory to where the Python program/script is by using the cd command in the command prompt:
cd C:\path\to\file
Start Python in the terminal and import the script using the import statement:
import test
You do not have to specify the .py file extension. The script will only run once per process, so you'll need to use the reload function (importlib.reload in Python 3) to run it again after its first import.
You can make python run the script from a specific directory:
python C:\path\to\file\test.py | 2 | 0 | 0 | I have Python 3.6 and Windows 7.
I am able to successfully start the python interpreter in interactive mode, which I have confirmed by going to cmd, and typing in python, so my computer knows how to find the interpreter.
I am confused however, as to how to access files from the interpreter. For example, I have a file called test.py (yes, I made sure the correct file extension was used).
However, I do not know how to access test.py from the interpreter. Let us say for the sake of argument that the test.py file has been stored in C:\ How then would I access test.py from the interpreter? | Running a python program in Windows? | 1.2 | 0 | 0 | 1,359 |
40,829,181 | 2016-11-27T12:54:00.000 | 0 | 0 | 1 | 1 | python,cmd,interpreter | 40,829,271 | 3 | false | 0 | 0 | In command prompt you need to navigate to the file location. In your case it is in C:\ drive, so type:
cd C:\
and then proceed to run your program:
python test.py
or you could do it in one line:
python C:\test.py | 2 | 0 | 0 | I have Python 3.6 and Windows 7.
I am able to successfully start the python interpreter in interactive mode, which I have confirmed by going to cmd, and typing in python, so my computer knows how to find the interpreter.
I am confused however, as to how to access files from the interpreter. For example, I have a file called test.py (yes, I made sure the correct file extension was used).
However, I do not know how to access test.py from the interpreter. Let us say for the sake of argument that the test.py file has been stored in C:\ How then would I access test.py from the interpreter? | Running a python program in Windows? | 0 | 0 | 0 | 1,359 |
40,829,645 | 2016-11-27T13:44:00.000 | 0 | 0 | 1 | 1 | python,pip,splinter | 53,305,576 | 3 | false | 0 | 0 | I had the same issue, I uninstalled and reinstalled splinter many times but that didn't work. Then I typed source activate (name of my conda environment) and then did pip install splinter. It worked for me. | 1 | 0 | 0 | This question has been asked a few times, but the remedy appears to complicated enough that I'm still searching for a user specific solution. I recently re-installed anaconda; now, after entering
"pip install splinter"
in the Terminal on my Mac I get the response:
"Requirement already satisfied: splinter in /usr/local/lib/python2.7/site-packages
Requirement already satisfied: selenium>=2.53.6 in /usr/local/lib/python2.7/site-packages (from splinter)"
But, I get the following error in python (Anaconda) after entering import splinter
Traceback (most recent call last):
File "", line 1, in
import splinter
ImportError: No module named splinter"
When I enter which python in the terminal, this is the output: "/usr/local/bin/python"
I am editing the question here to add the solution: ~/anaconda2/bin/pip install splinter | Module not found in python after installing in terminal | 0 | 0 | 1 | 5,785 |
40,830,517 | 2016-11-27T15:21:00.000 | 2 | 0 | 0 | 0 | android,python,kivy,crash-reports | 40,833,444 | 1 | true | 0 | 1 | Finally figured out how to use adb logcat. Install android studio 2.2. Connect your device to the PC via USB and enable debugging mode in the developer options. cd in command prompt to C:\Users[user]\AppData\Local\Android\sdk\platform-tools, and run adb devices. If the serial number of your android device appears, you're good to go. Then run adb logcat while executing the kivy app in your phone, and you'll get the realtime logs. | 1 | 1 | 0 | When running my kivy app on Android Kivy Launcher it crashes instantly. I've looked everywhere for the logs, but couldn't find them. No .kivy folder is created, apps that view android logcat require root access and I couldn't get working adb logcat. Can someone explain to me how to use adb logcat to catch the error, or state another solution to my problem?
PS: The app uses kivy 1.9.1 and py 3.4.4, runs fine on windows and my cellphone is a Xperia Z5 running Android Marshmallow 6.0.1 | Cant find kivy logs when running with Kivy Launcher | 1.2 | 0 | 0 | 1,259 |
40,832,168 | 2016-11-27T18:08:00.000 | 0 | 0 | 0 | 1 | python,setup.py | 40,848,259 | 2 | true | 0 | 0 | Turned out the "correct" way to do it was pretty straight forward and I just missed it when looking in the documentation:
Use setup.cfg.
It's a standard config-file, where you can define a section for each build-target / kind of distribution (sdist, bdist_wheel, bdist_wininst, etc.), which contains the command line options you want to give to setup.py when building it. | 1 | 1 | 0 | When I run setup.py, I generally want to add different command line options to the call, based on which kind of distribution I'm building.
For example I want to add --user-access-control force if I build a windows installer (bdist_wininst).
Another example would be omitting the call to a post-install-script when building a source distribution.
My current solution would be to create small .bat and .sh scripts with the desired call to setup.py, but that feels somehow wrong.
Is there a better way to do what I want, or are my instincts failing me?
Edit: Found the correct way. See my answer below. | Automatically add command line options to setup.py based on target | 1.2 | 0 | 0 | 73 |
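A minimal sketch of what such a setup.cfg could look like for the bdist_wininst example from the question; distutils reads one section per command and maps command-line dashes to underscores, but the exact option names should be checked against python setup.py bdist_wininst --help for your distutils/setuptools version:

# setup.cfg -- per-command defaults picked up automatically by setup.py
[bdist_wininst]
user_access_control = force

[sdist]
formats = gztar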
40,836,932 | 2016-11-28T03:54:00.000 | 2 | 0 | 0 | 0 | python,pandas,dataframe | 40,836,970 | 1 | false | 0 | 0 | It depends on your application. The one "best practice" I know of for Pandas indexes is to keep their values unique. As for which column to use as your index, that depends on what you will later do with the DataFrame (e.g. what other data you might merge it with). | 1 | 0 | 1 | I'm relatively new to Pandas and I wanted to know best practices around incremental vs. primary key indexes.
In particular, I'm wondering:
what are the benefits of using a dataset's primary key as its DataFrame index, as opposed to just sticking with the default incremental integer index?
are any potential pitfalls for replacing the default incremental integer index with a dataset's primary key? | Using primary key column as index values in Pandas dataframe - Best practice? | 0.379949 | 0 | 0 | 1,018 |
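A small illustration of promoting a primary-key column to the index; verify_integrity enforces the uniqueness the answer recommends (the column names are made up):

import pandas as pd

df = pd.DataFrame({"user_id": [101, 102, 103],
                   "name": ["ann", "bob", "cara"]})

df = df.set_index("user_id", verify_integrity=True)   # raises if user_id has duplicates
print(df.loc[102, "name"])                            # fast label-based lookup: bob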
40,837,982 | 2016-11-28T05:59:00.000 | 0 | 0 | 0 | 0 | python,hbase,happybase | 41,020,057 | 1 | false | 0 | 0 | the hbase data model does not offer that efficiently without encoding the timestamp as a prefix of the row key, or using a second table as a secondary index. read up on hbase schema design, your use case is likely covered in sections on time series data. | 1 | 0 | 0 | I'm trying to get the items (and their count) that were inserted in the last 24 hours using happybase. All I can think of is to use the timestamp to do that, but I couldn't figure out how to do that.
I can connect to hbase | How to scan between two timestamps range using happybase? | 0 | 0 | 0 | 862 |
40,839,281 | 2016-11-28T07:41:00.000 | 2 | 0 | 1 | 0 | python,list,dictionary,tuples | 40,839,365 | 3 | false | 0 | 0 | print max(a.items(), key=lambda (key, val): sum(val[0][1:]))[0] # dog | 1 | 1 | 0 | For example, I have:
>>>a = {'dog': [('cute', 10, 20)], 'cat': [('nice', 12, 11)], 'fish':[('hate', 1, 3)]}
Expect to return 'dog' because the sum of the integers at index 1 and 2 is 30, which is greater than the sum of integers for cat and fish.
If I don't use import, is there a easy way to do that? | How to compare a integer in tuple in a list, to another integer in another tuple in another list in one dictionary? | 0.132549 | 0 | 0 | 74 |
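The answer's one-liner relies on Python 2's tuple-unpacking lambda (and the print statement); an equivalent that also runs on Python 3 might look like this, using the same data as the question:

a = {'dog': [('cute', 10, 20)], 'cat': [('nice', 12, 11)], 'fish': [('hate', 1, 3)]}

best = max(a, key=lambda k: sum(a[k][0][1:]))   # a[k][0] is the tuple, [1:] the two numbers
print(best)                                     # dog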
40,839,757 | 2016-11-28T08:16:00.000 | 0 | 0 | 0 | 0 | python,django,multithreading,python-3.x,http | 40,839,961 | 2 | false | 1 | 0 | It depends on how you deploy Django app. See Gunicorn or Uwsgi. Usually, there is a pool of processes.
Maybe db transaction could help you. | 1 | 0 | 0 | I am using django 1.10.2 with python 3.5.2 in a Linux machine.
I have 2 questions that are related:
What is spawn when a client connect to django? Is it a new thread for every client or a new process for every client?
I need to have a method in django that must only be accessed by the client one at a time. Basically this must be a thread safe method with perhaps a lock mechanism. How do I accomplish this in django.
Thanks in advance! | How to create a thread safe method in django | 0 | 0 | 0 | 555 |
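A minimal sketch of the two ideas hinted at in the answer: a threading.Lock only serialises requests inside one worker process, while a database row lock via select_for_update also works across the multiple processes Gunicorn/uWSGI spawn. The Counter model is hypothetical:

import threading
from django.db import transaction
from django.http import JsonResponse
from myapp.models import Counter   # hypothetical model with an integer `value` field

_local_lock = threading.Lock()      # protects threads within ONE process only

def critical_view(request):
    with _local_lock:
        with transaction.atomic():
            # Row-level lock: other transactions block here until this one commits.
            counter = Counter.objects.select_for_update().get(pk=1)
            counter.value += 1
            counter.save()
    return JsonResponse({"value": counter.value})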
40,840,480 | 2016-11-28T09:03:00.000 | -1 | 0 | 1 | 1 | python,python-3.x,ubuntu | 40,840,607 | 2 | false | 0 | 0 | I think you have to build it yourself from source... You can easily find a guide for that if you google it. | 1 | 2 | 0 | I would like to install my own package in my local dir (I do not have the root privilege). How to install python3-dev locally?
I am using ubuntu 16.04. | python3: how to install python3-dev locally? | -0.099668 | 0 | 0 | 2,282 |
40,840,731 | 2016-11-28T09:18:00.000 | 4 | 0 | 0 | 0 | python,gensim,lda,topic-modeling | 40,842,347 | 1 | true | 0 | 0 | Finally figured it out. The issue with small documents is that if you try to filter the extremes from the dictionary, you might end up with empty lists in corpus (corpus = [dictionary.doc2bow(text)]).
So the values of the parameters in dictionary.filter_extremes(no_below=2, no_above=0.1) need to be selected accordingly and carefully before corpus = [dictionary.doc2bow(text)].
I just removed the filter extremes and lda model runs fine now. Though I will change the parameter values in filter extreme and use it later. | 1 | 2 | 1 | Getting this error in python when trying to compute lda for a smaller size of corpus but works fine in other cases.
The size of corpus is 15 and I tried setting the number of topic to 5 then reduced it to 2 but it still gives the same error : ValueError: cannot compute LDA over an empty collection (no terms)
getting error at this line : lda = models.LdaModel(corpus, num_topics=topic_number, id2word=dictionary, passes=passes)
where corpus is corpus = [dictionary.doc2bow(text) for a, id, text, s_date, e_date, qd, qd_perc in texts]
Why is it giving no terms? | ValueError: cannot compute LDA over an empty collection (no terms) | 1.2 | 0 | 0 | 5,310 |
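A minimal sketch of the fix implied by the accepted answer: keep filter_extremes, but drop any documents whose bag-of-words came out empty before building the LdaModel (the toy texts are for illustration only):

from gensim import corpora, models

texts = [["cats", "dogs", "birds"], ["dogs", "birds", "fish"], ["rare"]]   # toy tokenised docs

dictionary = corpora.Dictionary(texts)
dictionary.filter_extremes(no_below=2, no_above=0.9)       # may empty some documents

corpus = [dictionary.doc2bow(text) for text in texts]
corpus = [bow for bow in corpus if bow]                    # drop empty bag-of-words lists

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=5)
print(lda.print_topics())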
40,840,738 | 2016-11-28T09:18:00.000 | 0 | 0 | 1 | 1 | python,macos,opencv | 40,842,338 | 1 | false | 0 | 0 | copy cv2.so and cv.py to /System/Library/Frameworks/Python.framework/Versions/2.7/lib/
You can find this two files in /usr/local/Cellar/opecv/../lib/python2.7 | 1 | 0 | 1 | Today I was installing the opencv by
homebrew install opencv
and then I try to import it by:
python
import cv2
and it returns: No module named cv2
However, I try to import it by:
Python3
Import cv2
it works well.
I tried to install opencv again but homebrew said it has been installed.
I don't know what I can do now. | Config the opencv in python 2.7 of MacOS | 0 | 0 | 0 | 30
40,841,872 | 2016-11-28T10:20:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 47,368,406 | 1 | false | 0 | 0 | OP self-solved problem by upgrading to the docker tag 0.10.0-devel-gpu (and it should work going forward). | 1 | 0 | 1 | Is it required to install TF Slim externally after pulling a docker image latest-devel-gpu(r0.10) for GPU environment?
when I do import tensorflow.contrib.slim as slim it gives me an error:
"No module named Slim" through python. | is it required to install TF Slim externally after pulling a docker image latest-devel-gpu(r0.10) | 0 | 0 | 0 | 939 |
40,843,794 | 2016-11-28T12:01:00.000 | 2 | 0 | 0 | 0 | python,opencv,image-processing | 40,844,025 | 2 | false | 0 | 0 | The state of the art for such object recognition tasks are convolutional neural networks, but you will need a large labelled training set, which might rule that out. Otherwise SIFT/SURF is probably what you are looking for. They are pretty robust towards most transformations. | 1 | 0 | 1 | I have multiple images of same object taken at different angles and has many such objects. I need to match a test image which is taken at a random angle later belongs to particular object with similar background, by matching it with those images. The objects are light installations inside a building. Same object may be installed at different places, but backgrounds are different.
I used mean shift error, template matching from opencv and Structural Similarity Index, but with less accuracy.
How about Image Fingerprinting or SIFT/SURF | How to match images taken at different angles | 0.197375 | 0 | 0 | 1,892 |
40,847,099 | 2016-11-28T14:49:00.000 | 2 | 0 | 0 | 0 | python,django,django-models | 40,847,290 | 1 | true | 1 | 0 | You can check the django_migrations table in your database to see which migrations are applied, and delete the other ones from yourapp/migrations. | 1 | 0 | 0 | I'm developing a website in Django, but yesterday I did a bad thing with my models.
I ran the "makemigrations" command, but when I tried to run the "migrate" command, it did not work. So I would like to cancel all my "makemigrations" that have not been migrated.
Is that possible ??
Thanks! | Cancel makemigrations Django | 1.2 | 0 | 0 | 1,952 |
40,847,935 | 2016-11-28T15:29:00.000 | 0 | 0 | 1 | 0 | ipython,ipython-notebook | 65,730,461 | 1 | false | 1 | 0 | I was able to upload 6 GB file also in the later version of the Anaconda Distribution, I think the same was fixed in the later versions.
I am currently using conda 4.5.11. | 1 | 3 | 0 | I am trying to upload a weblog file which is of size 500MB in my IPython notebook. But I get the error "Cannot upload the file >25Mb".
Is there a way I can overcome this error. Any help would be appreciated.
Thanks. | Cannot upload huge file in IPython notebook | 0 | 0 | 0 | 1,817 |
40,851,318 | 2016-11-28T18:44:00.000 | 0 | 1 | 1 | 0 | python,string,hardcoded | 40,851,410 | 4 | false | 0 | 0 | You could do any of these: 1) read the string from a file, 2) read the string from a database, 3) pass the string as a command line argument | 4 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber-security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP in to the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 0 | 0 | 0 | 933 |
40,851,318 | 2016-11-28T18:44:00.000 | 0 | 1 | 1 | 0 | python,string,hardcoded | 40,851,444 | 4 | false | 0 | 0 | One option is to use asymmetric encryption, you can request the private string from a server using it (see SSL/TLS).
If you want to do it locally you should write/read to a file (OS should take care of authorization, by means of user access) | 4 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber-security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP in to the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 0 | 0 | 0 | 933 |
40,851,318 | 2016-11-28T18:44:00.000 | 0 | 1 | 1 | 0 | python,string,hardcoded | 43,877,881 | 4 | false | 0 | 0 | If you want to hide it, Encrypt it, you see this alot on github where people accidently post their aws keys or what not to github because they didnt use it as a "secret".
always encrypt important data, never use hardcoded values like ip as this can easily be changed, some sort of dns resolver can help to keep you on route.
in this specific case you mentioned using a DNS to point to a proxy will help to mask the endpoint ip. | 4 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber-security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP in to the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 0 | 0 | 0 | 933 |
40,851,318 | 2016-11-28T18:44:00.000 | 1 | 1 | 1 | 0 | python,string,hardcoded | 40,852,193 | 4 | true | 0 | 0 | There are several ways to obscure a hard-coded string. One of the simplest is to XOR the bits with another string or a repeated constant. Thus you could have two hardcoded short arrays that, when XORed together, produce the string you want to obscure. | 4 | 0 | 0 | I'm wondering how it's possible to not hard-code a string, for example, an IP in some software, or in remote connection software.
I heard about a cyber-security expert who found the IP of someone who was hacking people in his software, and with that they tracked him.
So, how can I avoid hardcoding my IP in to the software, and how do people find my IP if I do hardcode it? | How to not hardcode a String in a software? | 1.2 | 0 | 0 | 933 |
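A small illustration of the XOR idea from the accepted answer; the key is an arbitrary made-up constant, and this only hides the string from a casual strings dump, so it is obfuscation, not encryption:

KEY = 0x4b   # arbitrary constant

def xor_bytes(data, key=KEY):
    return bytes(b ^ key for b in data)

obscured = xor_bytes(b"203.0.113.7")      # this is what you would actually hard-code
print(list(obscured))                     # unreadable in a plain strings dump
print(xor_bytes(obscured).decode())       # XOR again to recover: 203.0.113.7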
40,852,458 | 2016-11-28T19:58:00.000 | 2 | 0 | 0 | 0 | python,algorithm,graph | 40,852,542 | 1 | true | 0 | 0 | 4 is the correct one, because all edges have the same weight, so you need to find the path that traverses the minimum number of edges.
1 Is wrong because depth-first search doesn't consider edge weights, so any node could be reached first | 1 | 1 | 1 | There is question from one Quiz that I don't fully understand:
Suppose you have a weighted directed graph and want to find a path between nodes A and B with the smallest total weight. Select the most accurate statement:
If some edges have negative weights, depth-first search finds a correct solution.
If all edges have weight 2, depth-first search guarantees that the first path found to be is the shortest path.
If some edges have negative weights, breadth-first search finds a correct solution.
If all edges have weight 2, breadth-first search guarantees that the first path found to be is the shortest path.
Am I right that #1 is correct? | Smallest total weight in weighted directed graph | 1.2 | 0 | 0 | 826 |
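A short sketch of why statement 4 holds: breadth-first search visits nodes in order of increasing edge count, so with uniform weights the first time it reaches the target it has found a minimum-weight path. The graph below is a made-up adjacency list:

from collections import deque

def bfs_shortest_path(graph, start, goal):
    # Return the path with the fewest edges from start to goal, or None.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path                      # first hit has the fewest edges
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

graph = {"A": ["C", "D"], "C": ["B"], "D": ["E"], "E": ["B"]}
print(bfs_shortest_path(graph, "A", "B"))    # ['A', 'C', 'B']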
40,852,497 | 2016-11-28T20:00:00.000 | 0 | 0 | 0 | 0 | python-2.7,google-bigquery,google-cloud-python | 48,312,211 | 3 | false | 0 | 0 | I don't think this is currently possible. This is why I tend to use the bq cli when I want to load complicated JSON files with many different columns.
Something like this:
bq load --source_format=NEWLINE_DELIMITED_JSON \
[PROJECT_ID]:[DATASET].[TABLE] gs://[BUCKET]/[FILENAME].json \
[PATH TO SCHEMA FOLDER]/schema.json | 1 | 7 | 0 | If I already have the schema file, for example: schema.json. How can I load the file to create the table or job schema using the google-cloud-python API? | Load schema json file to create table or job schema | 0 | 0 | 0 | 2,279 |
40,856,388 | 2016-11-29T01:32:00.000 | 10 | 0 | 1 | 0 | python,types,theory | 40,856,463 | 1 | true | 0 | 0 | The Type Completeness Principle:
No operation should be arbitrarily restricted in the types of values involved.
- Watt
First-class values can be evaluated, passed as arguments and used as components of composite values. Functional languages attempt to make no class distinctions, whereas imperative languages typically treat functions (at best) as second-class values.
Pretty much all programming languages limit the kinds of entities that may be passed as values (and therefore have a meaningful type). In C or C++, functions are not values, though pointers to functions are. Classes are not values.
In Java, methods and classes are not values, though you can obtain a reified object representing a class as a value, and in Java 8 you can pass method references as values. Packages are not values, however.
In Haskell, functions are first-class values, so they can be passed as arguments and returned as values. Since Haskell is statically typed,
the type system is capable of expressing function types. | 1 | 7 | 0 | In the book Programming Language Design Concepts, it says :
PYTHON counts procedures as first-class values, along with all primitive and composite values. Thus PYTHON conforms well to the Type Completeness Principle.
I still didn't get it. | What is Type Completeness Principle? | 1.2 | 0 | 0 | 1,011 |
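A tiny illustration of what "procedures as first-class values" means in Python: functions can be bound to names, stored in data structures, and passed to and returned from other functions just like any other value:

def shout(text):
    return text.upper() + "!"

def twice(f):                        # takes a function, returns a new function
    return lambda x: f(f(x))

handlers = {"loud": shout, "double": twice(shout)}   # functions stored as dict values
print(handlers["loud"]("hello"))     # HELLO!
print(handlers["double"]("hello"))   # HELLO!!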
40,856,714 | 2016-11-29T02:14:00.000 | 2 | 0 | 0 | 0 | python,web,web2py | 40,870,054 | 1 | true | 1 | 0 | web2py serves static files from the application's /static folder, so just put the files in there. If you need to generate links to them, you can use the URL helper: URL('static', 'path/to/static_file.html') (where the second argument represents the path within the /static folder). | 1 | 0 | 0 | Say I have some comp html files designer gave me and I want to just use it right away in a web2py website running on 127.0.0.1, with web2py MVC structure, how can I achieve that? | How to visit html static site inside web2py project | 1.2 | 0 | 0 | 110 |
40,857,813 | 2016-11-29T04:26:00.000 | 0 | 0 | 0 | 0 | python,web-scraping,beautifulsoup,bs4 | 40,860,108 | 1 | false | 1 | 0 | Try using multi threading or multiprocessing to spawn threads, i think it will spawn a thread for every request and it won't skip over the url if it's taking too long. | 1 | 0 | 0 | I have been trying to get this web scraping script working properly, and am not sure what to try next. Hoping someone here knows what I should do.
I am using BS4 and the problem is whenever a URL takes a long time to load it skips over that URL (leaving an output file with fewer inputs in times of high page load times). I have been trying to add on a timer so that it only skips over the url if it doesn't load in x seconds.
Can anyone point me in the right direction?
Thanks! | Python BS4 Scraping Script Timer | 0 | 0 | 1 | 243 |
40,864,539 | 2016-11-29T11:16:00.000 | 1 | 0 | 0 | 0 | python,openerp,odoo-9 | 40,881,979 | 2 | true | 1 | 0 | if you don't want to use the models in the addons store, You can create a new class inherit from models.Model and overrid the create and write method to save audit in another model and create new Model that inherit the new model not the models.Model class this when ever a create or write is happen it will call the create and write of the parent class not the create | 1 | 1 | 0 | I want to create a new module that save history of a record when someone edit it but doesn't find out any documents regards how to catch an edit action. Does anyone know how to do it ? | How can I "catch" action edit in odoo | 1.2 | 0 | 0 | 299 |
40,866,124 | 2016-11-29T12:37:00.000 | 46 | 0 | 0 | 0 | python,machine-learning,neural-network,deep-learning,keras | 40,870,126 | 2 | true | 0 | 0 | Using Dense(activation=softmax) is computationally equivalent to first add Dense and then add Activation(softmax). However there is one advantage of the second approach - you could retrieve the outputs of the last layer (before activation) out of such defined model. In the first approach - it's impossible. | 1 | 31 | 1 | I was wondering what was the difference between Activation Layer and Dense layer in Keras.
Since the Activation layer seems to be a fully connected layer, and Dense has a parameter to pass an activation function, what is the best practice?
Let's imagine a fictional network like this:
Input -> Dense -> Dropout -> Final Layer
Final Layer should be : Dense(activation=softmax) or Activation(softmax) ?
What is the cleanest, and why?
Thanks everyone! | Difference between Dense and Activation layer in Keras | 1.2 | 0 | 0 | 11,235 |
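A minimal sketch of the equivalence described in the accepted answer; both models compute the same function, but only the second exposes the pre-softmax outputs as a separate layer:

from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout

# Option 1: activation fused into the Dense layer
m1 = Sequential([
    Dense(64, activation='relu', input_dim=20),
    Dropout(0.5),
    Dense(10, activation='softmax'),
])

# Option 2: separate Activation layer, so the pre-activation (logit) layer is reachable
m2 = Sequential([
    Dense(64, activation='relu', input_dim=20),
    Dropout(0.5),
    Dense(10),
    Activation('softmax'),
])

m1.compile(optimizer='adam', loss='categorical_crossentropy')
m2.compile(optimizer='adam', loss='categorical_crossentropy')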
40,867,474 | 2016-11-29T13:44:00.000 | 5 | 1 | 1 | 0 | python-3.x,pythonanywhere | 40,895,650 | 2 | true | 0 | 0 | PythonAnywhere dev here,
You can use PythonAnywhere to do most of the things you can do on your own computer with Python
start a Python interactive console (from the "Consoles" tab)
edit a python file and run it (from the "Files" tab)
The exception is that, if you want to do things with graphics, like use pygame, that won't work on PythonAnywhere. But most text-based console things will work.
You can also do some more funky things, like host a web application ("Web"), and schedule tasks to run at regular intervals ("Schedule"). If you upgrade to a premium account, you can also run "Jupyter Notebooks", which are popular in the scientific community.
If you need help with anything, drop us a line to [email protected] | 1 | 1 | 0 | I recently opened an account with PythonAnywhere and learnt it is an online IDE and web hosting service but as a beginner in python 3.4, what exactly can i do with it? | What is Python Anywhere used for? | 1.2 | 0 | 0 | 3,662 |
40,871,371 | 2016-11-29T16:52:00.000 | 1 | 0 | 0 | 0 | python,classification,gbm | 40,871,454 | 1 | true | 0 | 0 | You can replace the binary variable with the actual country name then collapse all of these columns into one column. Use LabelEncoder on this column to create a proper integer variable and you should be all set. | 1 | 0 | 1 | I use GradientBoosting classifier to predict gender of users. The data have a lot of predictors and one of them is the country. For each country I have binary column. There are always only one column set to 1 for all country columns. But such desicion is very slow from computation point of view. Is there any way to represent country columns with only one column? I mean correct way. | GradientBoostingClassifier and many columns | 1.2 | 0 | 0 | 69 |
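A minimal sketch of the accepted answer's suggestion: collapse the per-country indicator columns back into a single column and label-encode it; the column names here are made up:

import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"country_US": [1, 0, 0],
                   "country_DE": [0, 1, 0],
                   "country_FR": [0, 0, 1]})

country_cols = [c for c in df.columns if c.startswith("country_")]
country = df[country_cols].idxmax(axis=1).str.replace("country_", "")  # back to one column
df["country_code"] = LabelEncoder().fit_transform(country)
df = df.drop(country_cols, axis=1)
print(df)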
40,871,841 | 2016-11-29T17:15:00.000 | 0 | 1 | 0 | 0 | python,module | 42,648,333 | 1 | false | 0 | 0 | There are 2 types of cx_Freeze installation packages: amd64 or win32.
Make sure that what you have is the right version. I first installed the amd64 version, but apparently my Python version is 32-bit, so I ran into the same problem as you. Try the other one; maybe it will work. | 1 | 0 | 0 | I am running a very simple piece of code to test the cx_Freeze module, but when running it the error above appears.
I am using Python 3.5 and the new version (5.0) of cx_Freeze.
CODE:
Calling cx_Freeze:
from cx_Freeze import setup, Executable
setup(name = "Prueba", version = "0.1", description = "My application!",
      executables = [Executable("pruebas.py")])
pruebas.py: text = 'Hello'; print(text)
Thanks. | Python cx_Freeze error “No module named 'cx_Freeze.util' | 0 | 0 | 0 | 761 |
40,872,683 | 2016-11-29T18:01:00.000 | 0 | 0 | 1 | 0 | python,opencv | 72,478,813 | 4 | false | 0 | 0 | So I was using PyCharm, and what worked for me was to install it directly from file->settings, Project:your-project-name->Python Interpreter list | 1 | 22 | 1 | I am trying to install opencv in python on my windows machine but I am unable to do so. I have python 2.7.11::Anaconda 2.4.1 <32-bit>
Here is what I have tried till now -
pip install cv2 on command line gives the error :
could not find a version that satisfies the requirement cv2
I downloaded the package from sourceforge site, followed the steps and pasted cv2.pyd in C:\Python27\Lib\site-packages but still it is not working. I get the following error message
ImportError: No module named cv2
(I already have numpy installed and it works just fine). | Unable to install cv2 on windows | 0 | 0 | 0 | 60,085 |
40,877,781 | 2016-11-29T23:37:00.000 | 3 | 0 | 0 | 0 | python,tensorflow | 58,649,705 | 9 | false | 0 | 0 | It is very simple in windows :
Go to : C:\Users\Username\.keras\datasets
and then Delete the Dataset that you want to redownload or has the error | 3 | 14 | 1 | I am getting the following error when I run mnist = input_data.read_data_sets("MNIST_data", one_hot = True).
EOFError: Compressed file ended before the end-of-stream marker was reached
Even when I extract the file manually and place it in the MNIST_data directory, the program is still trying to download the file instead of using the extracted file.
When I extract the file using WinZip which is the manual way, WinZip tells me that the file is corrupt.
How do I solve this problem?
I can't even load the data set now, I still have to debug the program itself. Please help.
I pip installed Tensorflow and so I don't have a Tensorflow example. So I went to GitHub to get the input_data file and saved it in the same directory as my main.py. The error is just regarding the .gz file. The program could not extract it.
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
Reloaded modules: input_data
Extracting MNIST_data/train-images-idx3-ubyte.gz
C:\Users\Nikhil\Anaconda3\lib\gzip.py:274: VisibleDeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future
return self._buffer.read(size)
Traceback (most recent call last):
File "", line 1, in
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py", line 26, in
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 181, in read_data_sets
train_images = extract_images(local_file)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 60, in extract_images
buf = bytestream.read(rows * cols * num_images)
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 274, in read
return self._buffer.read(size)
File "C:\Users\Nikhil\Anaconda3\lib_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 480, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached | EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set | 0.066568 | 0 | 0 | 52,865 |
40,877,781 | 2016-11-29T23:37:00.000 | 19 | 0 | 0 | 0 | python,tensorflow | 46,372,714 | 9 | false | 0 | 0 | This is because for some reason you have an incomplete download for the MNIST dataset.
You will have to manually delete the downloaded folder which usually resides in ~/.keras/datasets or any path specified by you relative to this path, in your case MNIST_data.
Perform the following steps in the terminal (ctrl + alt + t):
cd ~/.keras/datasets/
rm -rf "dataset name"
You should be good to go! | 3 | 14 | 1 | I am getting the following error when I run mnist = input_data.read_data_sets("MNIST_data", one_hot = True).
EOFError: Compressed file ended before the end-of-stream marker was reached
Even when I extract the file manually and place it in the MNIST_data directory, the program is still trying to download the file instead of using the extracted file.
When I extract the file using WinZip which is the manual way, WinZip tells me that the file is corrupt.
How do I solve this problem?
I can't even load the data set now, I still have to debug the program itself. Please help.
I pip installed Tensorflow and so I don't have a Tensorflow example. So I went to GitHub to get the input_data file and saved it in the same directory as my main.py. The error is just regarding the .gz file. The program could not extract it.
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
Reloaded modules: input_data
Extracting MNIST_data/train-images-idx3-ubyte.gz
C:\Users\Nikhil\Anaconda3\lib\gzip.py:274: VisibleDeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future
return self._buffer.read(size)
Traceback (most recent call last):
File "", line 1, in
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py", line 26, in
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 181, in read_data_sets
train_images = extract_images(local_file)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 60, in extract_images
buf = bytestream.read(rows * cols * num_images)
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 274, in read
return self._buffer.read(size)
File "C:\Users\Nikhil\Anaconda3\lib_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 480, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached | EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set | 1 | 0 | 0 | 52,865 |
40,877,781 | 2016-11-29T23:37:00.000 | 1 | 0 | 0 | 0 | python,tensorflow | 58,121,301 | 9 | false | 0 | 0 | I had the same issue when downloading datasets using torchvision on Windows. I was able to fix this by deleting all files from the following path:
C:\Users\UserName\MNIST\raw | 3 | 14 | 1 | I am getting the following error when I run mnist = input_data.read_data_sets("MNIST_data", one_hot = True).
EOFError: Compressed file ended before the end-of-stream marker was reached
Even when I extract the file manually and place it in the MNIST_data directory, the program is still trying to download the file instead of using the extracted file.
When I extract the file using WinZip which is the manual way, WinZip tells me that the file is corrupt.
How do I solve this problem?
I can't even load the data set now, I still have to debug the program itself. Please help.
I pip installed Tensorflow and so I don't have a Tensorflow example. So I went to GitHub to get the input_data file and saved it in the same directory as my main.py. The error is just regarding the .gz file. The program could not extract it.
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
Reloaded modules: input_data
Extracting MNIST_data/train-images-idx3-ubyte.gz
C:\Users\Nikhil\Anaconda3\lib\gzip.py:274: VisibleDeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future
return self._buffer.read(size)
Traceback (most recent call last):
File "", line 1, in
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py", line 26, in
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 181, in read_data_sets
train_images = extract_images(local_file)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 60, in extract_images
buf = bytestream.read(rows * cols * num_images)
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 274, in read
return self._buffer.read(size)
File "C:\Users\Nikhil\Anaconda3\lib_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 480, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached | EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set | 0.022219 | 0 | 0 | 52,865 |
40,878,325 | 2016-11-30T00:36:00.000 | 0 | 0 | 0 | 0 | python,pandas,anaconda | 60,873,472 | 4 | false | 0 | 0 | conda install pip (because conda install mglearn will give an error)
pip install mglearn
grr = pd.plotting.scatter_matrix( ...., cmap=mglearn.cm3)
if you still can't see the output then you might have missed %matplotlib inline | 1 | 1 | 1 | I try to run the following code but it gives the following error in recognistion of mglearn color map.
grr = pd.scatter_matrix( ...., cmap=mglearn.cm3)
NameError: name 'mglearn' is not defined
I should add that pd is the Anaconda pandas package imported as pd, but it does not recognize the color map mglearn.cm3.
Any suggestions? | what is this color map? cmap=mglearn.cm3 | 0 | 0 | 0 | 5,864 |