source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
39,042 | How do you use LeakyRelu as an activation function in sequence DNN in keras?
If I want to write something similar to: model = Sequential()
model.add(Dense(90, activation='LeakyRelu')) What is the solution? Do I put LeakyRelu in the same way as Relu? Second question is: what are the best general settings for tuning the parameters of LeakyRelu? When is its performance significantly better than Relu? | You can use the LeakyReLU layer, as in the Python class, instead of just specifying the string name like in your example. It works similarly to a normal layer. Import LeakyReLU and instantiate a model: from keras.layers import LeakyReLU
model = Sequential()
# here change your line to leave out an activation
model.add(Dense(90))
# now add a LeakyReLU layer explicitly:
model.add(LeakyReLU(alpha=0.05)) Being able to simply write e.g. activation='relu' is made possible because of simple aliases that are created in the source code. For your second question: what are the best general settings for tuning the parameters of LeakyReLU? And when is its performance significantly better than ReLU? I can't give you optimal settings for the LeakyReLU, I'm afraid - they will be model/data dependent. The difference between the ReLU and the LeakyReLU is the ability of the latter to retain some degree of the negative values that flow into it, whilst the former simply sets all values less than 0 to be 0. In theory, this extended output range offers a slightly higher flexibility to the model using it. I'm sure the inventors thought it to be useful and perhaps proved that to be the case for a few benchmarks. In practice, however, people generally just stick to the ReLU, as the benefits of the LeakyReLU are not consistent and the ReLU is cheaper to compute and therefore models train slightly faster. | {
"source": [
"https://datascience.stackexchange.com/questions/39042",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/58433/"
]
} |
39,137 | I am trying to build a Regression model and I am looking for a way to check whether there's any correlation between features and target variables? This is my sample dataset Loan_ID Gender Married Dependents Education Self_Employed ApplicantIncome\
0 LP001002 Male No 0 Graduate No 5849
1 LP001003 Male Yes 1 Graduate No 4583
2 LP001005 Male Yes 0 Graduate Yes 3000
3 LP001006 Male Yes 0 Not Graduate No 2583
4 LP001008 Male No 0 Graduate No 6000
CoapplicantIncome LoanAmount Loan_Amount_Term Credit_History Area Loan_Status
0.0 123 360.0 1.0 Urban Y
1508.0 128.0 360.0 1.0 Rural N
0.0 66.0 360.0 1.0 Urban Y
2358.0 120.0 360.0 1.0 Urban Y
0.0 141.0 360.0 1.0 Urban Y I am trying to predict the LoanAmount column based on the features available above. I just want to see if there's a correlation between the features and the target variable. I tried LinearRegression and GradientBoostingRegressor and I'm hardly getting an accuracy of around 0.30 - 0.40%. Any suggestions on algorithms, params etc. that I should use for better prediction? | Your data can be put into a pandas DataFrame using import pandas as pd
data = {'Loan ID': ['LP001002', 'LP001003', 'LP001005', 'LP001006', 'LP001008'],
'Married': ['No', 'Yes', 'Yes', 'Yes', 'No'],
'Dependents': [0, 1, 0, 0, 0],
'Education': ['Graduate', 'Graduate', 'Graduate', 'Not Graduate', 'Graduate'],
'Self_Employed': ['No', 'No', 'Yes', 'No', 'No'],
'Income': [5849, 4583, 3000, 2583, 6000],
'Coapplicant Income': [0, 1508, 0, 2358, 0],
'LoanAmount': [123, 128, 66, 120, 141],
'Area': ['Urban', 'Rural', 'Urban', 'Urban', 'Urban'],
'Loan Status': ['Y', 'N', 'Y', 'Y', 'Y']}
df = pd.DataFrame(data) Now to get a correlation we need to convert our categorical features to numerical ones. Of course the choice of order will affect the correlation but luckily all of our categories seem to be binary. If this is not the case you will need to devise a custom ordering. df = pd.DataFrame(data)
df['Married'] =df['Married'].astype('category').cat.codes
df['Education'] =df['Education'].astype('category').cat.codes
df['Self_Employed'] =df['Self_Employed'].astype('category').cat.codes
df['Area'] =df['Area'].astype('category').cat.codes
df['Loan Status'] = df['Loan Status'].astype('category').cat.codes Now we can get the correlation between the 'LoanAmount' and all the other features. df[df.columns[1:]].corr()['LoanAmount'][:] Now using some machine learning on this data is not likely to work. There just is not sufficient data to extract some relevant information between your large number of features and the loan amount. You need at least 10 times more instances than features in order to expect to get some good results. To only obtain the correlation between a feature and a subset of the features you can do df[['Income', 'Education', 'LoanAmount']].corr()['LoanAmount'][:] This will take a subset of the DataFrame and then apply the same corr() function as above. Make sure that the subset of columns selected includes the column with which you want to calculate the correlation, in this example that's 'LoanAmount'. | {
"source": [
"https://datascience.stackexchange.com/questions/39137",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/60105/"
]
} |
39,317 | I was going through the official documentation of scikit-learn after going through a book on ML and came across the following thing: the documentation describes sklearn.preprocessing.OrdinalEncoder() whereas the book covers sklearn.preprocessing.LabelEncoder(), and when I checked their functionality they looked the same to me. Can someone please tell me the difference between the two? | Afaik, both have the same functionality. The small difference is the idea behind them. OrdinalEncoder is for converting features, while LabelEncoder is for converting the target variable. That's why OrdinalEncoder can fit data that has the shape of (n_samples, n_features) while LabelEncoder can only fit data that has the shape of (n_samples,) (though in the past, LabelEncoder was used within a loop to handle what has now become the job of OrdinalEncoder). A short sketch illustrating this shape difference follows this row. | {
"source": [
"https://datascience.stackexchange.com/questions/39317",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/59601/"
]
} |
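A minimal scikit-learn sketch (added for illustration; the array contents are made up) of the shape difference described in the answer above: OrdinalEncoder works on a 2-D feature matrix, LabelEncoder on a 1-D target array.
import numpy as np
from sklearn.preprocessing import OrdinalEncoder, LabelEncoder
X = np.array([['red', 'S'], ['blue', 'M'], ['red', 'L']])  # features, shape (n_samples, n_features)
y = np.array(['cat', 'dog', 'cat'])                        # target, shape (n_samples,)
X_enc = OrdinalEncoder().fit_transform(X)  # encodes all feature columns at once
y_enc = LabelEncoder().fit_transform(y)    # encodes a single 1-D array
print(X_enc.shape, y_enc.shape)            # (3, 2) (3,)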
40,089 | I have been doing a classification problem and I have read many people's code and tutorials. One thing I've noticed is that many people take np.log or log of continuous variables like loan_amount or applicant_income etc. I just want to understand the reason behind it. Does it help improve our model's prediction accuracy? Is it mandatory, or is there any logic behind it? Please provide some explanation if possible. Thank you. | This is done when the variables span several orders of magnitude. Income is a typical example: its distribution is "power law", meaning that the vast majority of incomes are small and very few are big. This type of "fat tailed" distribution is studied in logarithmic scale because of the mathematical properties of the logarithm: $$log(x^n)= n log(x)$$ which implies $$log(10^4) = 4 * log(10)$$ and $$log(10^3) = 3 * log(10)$$ which transforms a huge difference $$ 10^4 - 10^3 $$ into a smaller one, $$ 4 - 3 $$, making the values comparable. A small numeric sketch of this effect follows this row. | {
"source": [
"https://datascience.stackexchange.com/questions/40089",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/61261/"
]
} |
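A small NumPy sketch (added for illustration; the income values are made up) of the compression described in the answer above, using log1p so that zeros are also handled safely.
import numpy as np
incomes = np.array([1000.0, 3000.0, 10000.0, 100000.0, 1000000.0])
logged = np.log1p(incomes)            # log(1 + x)
print(incomes.max() / incomes.min())  # 1000.0 -> raw values span three orders of magnitude
print(logged)                         # roughly [6.9, 8.0, 9.2, 11.5, 13.8] -> comparable scale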
40,130 | System: 1 name node, 4 cores, 16 GB RAM 1 master node, 4 cores, 16 GB RAM 6 data nodes, 4 cores, 16 GB RAM each 6 worker nodes, 4 cores, 16 GB RAM each around 5 Terabytes of storage space The data nodes and worker nodes exist on the same 6 machines and the name node and master node exist on the same machine. In our docker compose, we have 6 GB set for the master, 8 GB set for name node, 6 GB set for the workers, and 8 GB set for the data nodes. I have 2 rdds which I am calculating the cartesian product of, applying a function I wrote to it, and then storing the data in Hadoop as parquet tables. After around 180k parquet tables written to Hadoop, the python worker unexpectedly crashes due to EOFException in Java. conf = SparkConf().setAppName(
"TBG Input Creation App").setMaster("spark://master:7077").setAll(
[('spark.executor.memory', '6g'),
('spark.driver.memory', '4g'),
('spark.executor.heartbeatInterval', '3s'),
('spark.driver.extraJavaOptions', '-XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps'),
('spark.executor.extraJavaOptions', '-XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps')])
rdd_cart = rdd.cartesian(rdd2)
rdd.unpersist()
rdd2.unpersist()
rdd_cart.foreach(lambda row: calc_model(row, fields, vfp_fields)) Then inside the calc_model function, I write out the parquet table. After the crash, I can re-start the run with PySpark filtering out the ones I already ran, but after a few thousand more, it will crash again with the same EOFException. I am using foreach since I don't care about any returned values and simply just want the tables written to Hadoop. How can I identify the root cause of this Py4JJavaError and fix it to prevent constant crashing of the workers? stackoverflow relevant question and answer Job aborted due to stage failure: Task 10 in stage 148.0 failed 4 times, most recent failure: Lost task 10.3 in stage 148.0 (TID 4253, 10.0.5.19, executor 0): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:333)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:322)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:443)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:421)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:252)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:428)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:162)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.GeneratedMethodAccessor101.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:333)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:322)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:443)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:421)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:252)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:428)
... 24 more | This is done when the variables span several orders of magnitude. Income is a typical example: its distribution is "power law", meaning that the vast majority of incomes are small and very few are big. This type of "fat tailed" distribution is studied in logarithmic scale because of the mathematical properties of the logarithm: $$log(x^n)= n log(x)$$ which implies $$log(10^4) = 4 * log(10)$$ and $$log(10^3) = 3 * log(10)$$ which transforms a huge difference $$ 10^4 - 10^3 $$ in a smaller one $$ 4 - 3 $$ Making the values comparable. | {
"source": [
"https://datascience.stackexchange.com/questions/40130",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/24180/"
]
} |
40,714 | Are there any advantages to using log softmax over softmax? What are the reasons to choose one over the other? | There are a number of advantages of using log softmax over softmax including practical reasons like improved numerical performance and gradient optimization . These advantages can be extremely important for implementation especially when training a model can be computationally challenging and expensive. At the heart of using log-softmax over softmax is the use of log probabilities over probabilities, which has nice information theoretic interpretations. When used for classifiers the log-softmax has the effect of heavily penalizing the model when it fails to predict a correct class. Whether or not that penalization works well for solving your problem is open to your testing, so both log-softmax and softmax are worth using. | {
"source": [
"https://datascience.stackexchange.com/questions/40714",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/41591/"
]
} |
41,921 | Which is better for accuracy or are they the same?
Of course, if you use categorical_crossentropy you use one hot encoding, and if you use sparse_categorical_crossentropy you encode as normal integers.
Additionally, when is one better than the other? | Use sparse categorical crossentropy when your classes are mutually exclusive (e.g. when each sample belongs to exactly one class) and categorical crossentropy when one sample can have multiple classes or the labels are soft probabilities (like [0.5, 0.3, 0.2]). The formula for categorical crossentropy (S - samples, C - classes, $s \in c$ - sample belongs to class c) is: $$ -\frac{1}{N} \sum_{s\in S} \sum_{c \in C} 1_{s\in c} \log {p(s \in c)} $$ where $N$ is the number of samples. For the case when classes are exclusive, you don't need to sum over them - for each sample the only non-zero term is just $-\log p(s \in c)$ for the true class c. This allows you to conserve time and memory. Consider the case of 10000 mutually exclusive classes - just 1 log instead of summing up 10000 terms for each sample, and just one integer label instead of 10000 floats. The formula is the same in both cases, so there should be no impact on accuracy. A short Keras sketch of the two losses follows this row. | {
"source": [
"https://datascience.stackexchange.com/questions/41921",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/63516/"
]
} |
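A minimal Keras sketch (added for illustration; the random data and layer sizes are assumptions, not from the question) showing the two label formats side by side: integer class ids with sparse_categorical_crossentropy versus one-hot vectors with categorical_crossentropy.
import numpy as np
from tensorflow import keras
X = np.random.rand(32, 4).astype('float32')
y_int = np.random.randint(0, 3, size=(32,))      # integer labels: 0, 1 or 2
y_onehot = keras.utils.to_categorical(y_int, 3)  # one-hot labels, e.g. [0., 1., 0.]
model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    keras.layers.Dense(3, activation='softmax'),  # 3 mutually exclusive classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(X, y_int, epochs=1, verbose=0)          # integer labels -> sparse loss
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(X, y_onehot, epochs=1, verbose=0)       # one-hot labels -> categorical loss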
41,926 | In order to familiarize myself with semantic segmentation and convolutional neural networks I am going through this tutorial by MathWorks: Semantic Segmentation Using Deep Learning I did not use the pretrained version of Segnet since I wanted to test on my custom data set. All code is the same, however I have different classes, and fewer labels . Below image shows the label name and amount of pixels associated with each. To make up for the low pixel data for class 2, median frequency balancing was performed. imageFreq = tbl.PixelCount ./ tbl.ImagePixelCount
classWeights = median(imageFreq) ./ imageFreq I proceed to train the network using the code provided in the example with the options and lgraph unchanged. The SegNet network is created with weights initialized from the VGG-16 network. Unlike the example, I get a much lower global accuracy: To gain further insight I plotted the Mini-batch accuracy and Mini-batch loss against each iteration. It is clearly seen that the accuracy fluctuates wildly and ends up worse than it started, so the network learned absolutely nothing! However, the loss decreased gradually. A possible solution I propose would be to use inverse frequency balancing. However, in the example above, median frequency balancing was already performed, so I doubt how much this would help. Is the terrible performance related to simply not having enough training data? Can anything be done to improve performance with the existing data? Any suggestions are greatly appreciated. | Use sparse categorical crossentropy when your classes are mutually exclusive (e.g. when each sample belongs exactly to one class) and categorical crossentropy when one sample can have multiple classes or labels are soft probabilities (like [0.5, 0.3, 0.2]). Formula for categorical crossentropy (S - samples, C - classes, $s \in c$ - sample belongs to class c) is: $$ -\frac{1}{N} \sum_{s\in S} \sum_{c \in C} 1_{s\in c} log {p(s \in c)} $$ For case when classes are exclusive, you don't need to sum over them - for each sample only non-zero value is just $-log p(s \in c)$ for true class c. This allows to conserve time and memory. Consider case of 10000 classes when they are mutually exclusive - just 1 log instead of summing up 10000 for each sample, just one integer instead of 10000 floats. Formula is the same in both cases, so no impact on accuracy should be there. | {
"source": [
"https://datascience.stackexchange.com/questions/41926",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/49032/"
]
} |
42,599 | I have created three different models using deep learning for multi-class classification and each model gave me a different accuracy and loss value. The results of testing the models are the following: First Model: Accuracy: 98.1% Loss: 0.1882 Second Model: Accuracy: 98.5% Loss: 0.0997 Third Model: Accuracy: 99.1% Loss: 0.2544 My questions are: What is the relationship between the loss and accuracy values? Why is the loss of the third model the highest even though its accuracy is also the highest? | There is no fixed relationship between these two metrics. Loss can be seen as a distance between the true values of the problem and the values predicted by the model. The greater the loss is, the larger the errors you made on the data. Accuracy can be seen as the proportion of predictions you got right (equivalently, how many errors you made) on the data. That means: a low accuracy and huge loss means you made huge errors on a lot of data; a low accuracy but low loss means you made small errors on a lot of data; a great accuracy with low loss means you made small errors on a few data points (best case); your situation: a great accuracy but a huge loss means you made huge errors on a few data points. For your case, the third model can correctly predict more examples, but on those where it was wrong, it made larger errors (the distance between the true value and the predicted values is larger). A tiny numeric sketch of this effect follows this row. NOTE: Don't forget that a low or huge loss is a subjective metric, which depends on the problem and the data. It's a distance between the true value of the prediction, and the prediction made by the model. It depends also on the loss you use. Think: If your data are between 0 and 1, a loss of 0.5 is huge, but if your data are between 0 and 255, an error of 0.5 is low. Maybe think of cancer detection, and the probability of detecting a cancer. Maybe an error of 0.1 is huge for this problem, whereas an error of 0.1 for image classification is fine. | {
"source": [
"https://datascience.stackexchange.com/questions/42599",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/51129/"
]
} |
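A tiny numeric sketch (added for illustration; the probabilities are made up) of the point above: both models below get the same three out of four predictions right, yet the second one is punished much harder by log loss because it is very confident on the sample it gets wrong.
import numpy as np
y_true = np.array([1, 1, 0, 1])
p_mild = np.array([0.95, 0.90, 0.10, 0.40])           # wrong on the last sample, but not very confident
p_overconfident = np.array([0.95, 0.90, 0.10, 0.01])  # wrong on the last sample and very sure of it
def accuracy(y, p):
    return np.mean((p > 0.5) == y)
def log_loss(y, p):
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(accuracy(y_true, p_mild), accuracy(y_true, p_overconfident))  # 0.75 0.75 -> same accuracy
print(log_loss(y_true, p_mild), log_loss(y_true, p_overconfident))  # ~0.29 vs ~1.22 -> very different loss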
45,165 | I want to compute the precision, recall and F1-score for my binary KerasClassifier model, but don't find any solution. Here's my actual code: # Split dataset in train and test data
X_train, X_test, Y_train, Y_test = train_test_split(normalized_X, Y, test_size=0.3, random_state=seed)
# Build the model
model = Sequential()
model.add(Dense(23, input_dim=45, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
tensorboard = TensorBoard(log_dir="logs/{}".format(time.time()))
time_callback = TimeHistory()
# Fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1, callbacks=[tensorboard, time_callback]) And then I am predicting on new test data, and getting the confusion matrix like this: y_pred = model.predict(X_test)
y_pred =(y_pred>0.5)
list(y_pred)
cm = confusion_matrix(Y_test, y_pred)
print(cm) But is there any solution to get the accuracy score, the F1-score, the precision, and the recall? (If not complicated, also the cross-validation score, but that's not necessary for this answer.) Thank you for any help! | Metrics have been removed from Keras core. You need to calculate them manually. They removed them in version 2.0. Those metrics are all global metrics, but Keras works in batches. As a result, it might be more misleading than helpful. However, if you really need them, you can do it like this: from keras import backend as K
def recall_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1_m(y_true, y_pred):
precision = precision_m(y_true, y_pred)
recall = recall_m(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m])
# fit the model
history = model.fit(Xtrain, ytrain, validation_split=0.3, epochs=10, verbose=0)
# evaluate the model
loss, accuracy, f1_score, precision, recall = model.evaluate(Xtest, ytest, verbose=0) | {
"source": [
"https://datascience.stackexchange.com/questions/45165",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/59446/"
]
} |
47,142 | I want to know whether gradient descent is the main algorithm used in optimizers like Adam, Adagrad, RMSProp and several other optimizers. | No. Gradient descent is used in optimization algorithms that use the gradient as the basis of their step movement. Adam, Adagrad, and RMSProp all use some form of gradient descent, however they do not make up every optimizer. Evolutionary algorithms such as Particle Swarm Optimization and Genetic Algorithms are inspired by natural phenomena and do not use gradients. Other algorithms, such as Bayesian Optimization, draw inspiration from statistics. Check out a visualization of Bayesian Optimization in action. There are also a few algorithms that combine concepts from evolutionary and gradient-based optimization. Non-derivative-based optimization algorithms can be especially useful for irregular non-convex cost functions, non-differentiable cost functions, or cost functions that have a different left or right derivative. To understand why one may choose a non-derivative-based optimization algorithm, take a look at the Rastrigin benchmark function (a small sketch of it follows this row). Gradient-based optimization is not well suited for optimizing functions with so many local minima. | {
"source": [
"https://datascience.stackexchange.com/questions/47142",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/41591/"
]
} |
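A small sketch (added for illustration, not part of the original answer) of the Rastrigin benchmark mentioned above; its many local minima are what make pure gradient-based optimization struggle on it.
import numpy as np
def rastrigin(x, A=10.0):
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))
print(rastrigin([0.0, 0.0]))  # 0.0 -> the global minimum at the origin
print(rastrigin([1.0, 1.0]))  # 2.0 -> close to one of the many local minima
print(rastrigin([4.5, 4.5]))  # 80.5 -> far from the optimum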
48,369 | I have a dataset with 3 classes with the following items: Class 1: 900 elements Class 2: 15000 elements Class 3: 800 elements I need to predict class 1 and class 3, which signal important deviations from the norm. Class 2 is the default “normal” case which I don’t care about. What kind of loss function would I use here? I was thinking of using CrossEntropyLoss, but since there is a class imbalance, this would need to be weighted I suppose? How does that work in practice? Like this (using PyTorch)? summed = 900 + 15000 + 800
weight = torch.tensor([900, 15000, 800]) / summed
crit = nn.CrossEntropyLoss(weight=weight) Or should the weight be inverted? i.e. 1 / weight? Is this the right approach to begin with or are there other / better methods I could use? Thanks | What kind of loss function would I use here? Cross-entropy is the go-to loss function for classification tasks, either balanced or imbalanced. It is the first choice when no preference is built from domain knowledge yet. This would need to be weighted I suppose? How does that work in practice? Yes. The weight of class $c$ is the size of the largest class divided by the size of class $c$. For example, if class 1 has 900, class 2 has 15000, and class 3 has 800 samples, then their weights would be 16.67, 1.0, and 18.75 respectively. You can also use the smallest class as the numerator, which gives 0.889, 0.053, and 1.0 respectively. This is only a re-scaling, the relative weights are the same. A short sketch of this weight computation follows this row. Is this the right approach to begin with or are there other / better methods I could use? Yes, this is the right approach. EDIT: Thanks to @Muppet, we can also use class over-sampling, which is equivalent to using class weights. This is accomplished by WeightedRandomSampler in PyTorch, using the same aforementioned weights. | {
"source": [
"https://datascience.stackexchange.com/questions/48369",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/67135/"
]
} |
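A short PyTorch sketch (added for illustration; it assumes the class counts from the question and uses random logits) that builds the weights exactly as described above and hands them to CrossEntropyLoss.
import torch
import torch.nn as nn
class_counts = torch.tensor([900.0, 15000.0, 800.0])
weights = class_counts.max() / class_counts      # tensor([16.6667, 1.0000, 18.7500])
criterion = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(8, 3)                       # batch of 8 samples, 3 classes
targets = torch.randint(0, 3, (8,))
print(weights, criterion(logits, targets).item())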
48,531 | I'm currently working as a data scientist at a large company (my first job as a DS, so this question may be a result of my lack of experience). They have a huge backlog of really important data science projects that would have a great positive impact if implemented. But. Data pipelines are non-existent within the company, the standard procedure is for them to hand me gigabytes of TXT files whenever I need some information. Think of these files as tabular logs of transactions stored in arcane notation and structure. No whole piece of information is contained in one single data source, and they can't grant me access to their ERP database for "security reasons". Initial data analysis for the simplest project requires brutal, excruciating data wrangling. More than 80% of a project's time spent is me trying to parse these files and cross data sources in order to build viable datasets. This is not a problem of simply handling missing data or preprocessing it, it's about the work it takes to build data that can be handled in the first place (solvable by DBA or data engineering, not data science?). 1) Feels like most of the work is not related to data science at all. Is this accurate? 2) I know this is not a data-driven company with a high-level data engineering department, but it is my opinion that in order to build for a sustainable future of data science projects, minimum levels of data accessibility are required. Am I wrong? 3) Is this type of setup common for a company with serious data science needs? | Feels like most of the work is not related to data science at all. Is this accurate? Yes. I know this is not a data-driven company with a high-level data engineering department, but it is my opinion that data science requires minimum levels of data accessibility. Am I wrong? You're not wrong, but such are the realities of real life. Is this type of setup common for a company with serious data science needs? Yes. From a technical standpoint, you need to look into ETL solutions that can make your life easier. Sometimes one tool can be much faster than another to read certain data. E.g. R's readxl is orders of magnitude faster than Python's pandas at reading xlsx files; you could use R to import the files, then save them to a Python-friendly format (parquet, SQL, etc). I know you're not working on xlsx files and I have no idea if you use Python - it was just an example. From a practical standpoint, two things: First of all, understand what is technically possible. In many cases,
the people telling you no are IT-illiterate people who worry about
business or compliance considerations, but have no concept of what is
and isn't feasible from an IT standpoint. Try to speak to the DBAs or
to whoever manages the data infrastructure. Understand what is
technically possible. THEN, only then, try to find a compromise. E.g.
they won't give you access to their system, but I presume there is a
database behind it? Maybe they can extract the data to some other
formats? Maybe they can extract the SQL statements that define the
data types etc? Business people are more likely to help you if you can make the case that doing so is in THEIR interest. If they don't even believe in what you're doing, tough luck... | {
"source": [
"https://datascience.stackexchange.com/questions/48531",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/53204/"
]
} |
48,796 | If I want to write an LSTM network and feed it inputs of different array sizes, how is that possible? For example I want to get voice messages or text messages in a different language and translate them. So the first input may be "hello" but the second is "how are you doing". How can I design an LSTM that can handle different input array sizes? I am using the Keras implementation of LSTM. | The easiest way is to use Padding and Masking. There are three general ways to handle variable-length sequences: Padding and masking (which can be used for (3)), Batch size = 1, and Batch size > 1, with equi-length samples in each batch. Padding and masking In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10 is the special value, then X = [
[[1, 1.1],
[0.9, 0.95]], # sequence 1 (2 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
] will be converted to X2 = [
[[1, 1.1],
[0.9, 0.95],
[-10, -10]], # padded sequence 1 (3 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
] This way, all sequences would have the same length. Then, we use a Masking layer that skips those special timestamps like they don't exist. A complete example is given at the end. For cases (2) and (3) you need to set the seq_len of LSTM to None , e.g. model.add(LSTM(units, input_shape=(None, dimension))) this way LSTM accepts batches with different lengths; although samples inside each batch must be the same length. Then, you need to feed a custom batch generator to model.fit_generator (instead of model.fit ). I have provided a complete example for simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size sequences with the same length, or (b) select sequences with almost the same length, and pad the shorter ones the same as case (1), and use a Masking layer before LSTM layer to ignore the padded timestamps, e.g. model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units)) where first dimension of input_shape in Masking is again None to allow batches with different lengths. Here is the code for cases (1) and (2): from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np
class MyBatchGenerator(Sequence):
'Generates data for Keras'
def __init__(self, X, y, batch_size=1, shuffle=True):
'Initialization'
self.X = X
self.y = y
self.batch_size = batch_size
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.y)/self.batch_size))
def __getitem__(self, index):
return self.__data_generation(index)
def on_epoch_end(self):
'Shuffles indexes after each epoch'
self.indexes = np.arange(len(self.y))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, index):
Xb = np.empty((self.batch_size, *self.X[index].shape))
yb = np.empty((self.batch_size, *self.y[index].shape))
# naively use the same sample over and over again
for s in range(0, self.batch_size):
Xb[s] = X[index]
yb[s] = y[index]
return Xb, yb
# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3
# Data
np.random.seed(123) # to generate the same numbers
# create sequence lengths between 1 to 10
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N) # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]
# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)
# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
seq_len = x.shape[0]
Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32) Extra notes Note that if we pad without masking, padded value will be regarded as actual value, thus, it becomes noise in data. For example, a padded temperature sequence [20, 21, 22, -10, -10] will be the same as a sensor report with two noisy (wrong) measurements at the end. Model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask. | {
"source": [
"https://datascience.stackexchange.com/questions/48796",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/67605/"
]
} |
49,522 | I was going through BERT paper which uses GELU (Gaussian Error Linear Unit) which states equation as $$ GELU(x) = xP(X ≤ x) = xΦ(x).$$ which in turn is approximated to $$0.5x(1 + tanh[\sqrt{
2/π}(x + 0.044715x^3)])$$ Could you simplify the equation and explain how it has been approximated. | GELU function We can expand the cumulative distribution of $\mathcal{N}(0, 1)$ , i.e. $\Phi(x)$ , as follows: $$\text{GELU}(x):=x{\Bbb P}(X \le x)=x\Phi(x)=0.5x\left(1+\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right)$$ Note that this is a definition , not an equation (or a relation). Authors have provided some justifications for this proposal, e.g. a stochastic analogy , however mathematically, this is just a definition. Here is the plot of GELU: Tanh approximation For these type of numerical approximations, the key idea is to find a similar function (primarily based on experience), parameterize it, and then fit it to a set of points from the original function. Knowing that $\text{erf}(x)$ is very close to $\text{tanh}(x)$ and first derivative of $\text{erf}(\frac{x}{\sqrt{2}})$ coincides with that of $\text{tanh}(\sqrt{\frac{2}{\pi}}x)$ at $x=0$ , which is $\sqrt{\frac{2}{\pi}}$ , we proceed to fit $$\text{tanh}\left(\sqrt{\frac{2}{\pi}}(x+ax^2+bx^3+cx^4+dx^5)\right)$$ (or with more terms) to a set of points $\left(x_i, \text{erf}\left(\frac{x_i}{\sqrt{2}}\right)\right)$ . I have fitted this function to 20 samples between $(-1.5, 1.5)$ ( using this site ), and here are the coefficients: By setting $a=c=d=0$ , $b$ was estimated to be $0.04495641$ . With more samples from a wider range (that site only allowed 20), coefficient $b$ will be closer to paper's $0.044715$ . Finally we get $\text{GELU}(x)=x\Phi(x)=0.5x\left(1+\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right)\simeq 0.5x\left(1+\text{tanh}\left(\sqrt{\frac{2}{\pi}}(x+0.044715x^3)\right)\right)$ with mean squared error $\sim 10^{-8}$ for $x \in [-10, 10]$ . Note that if we did not utilize the relationship between the first derivatives, term $\sqrt{\frac{2}{\pi}}$ would have been included in the parameters as follows $$0.5x\left(1+\text{tanh}\left(0.797885x+0.035677x^3\right)\right)$$ which is less beautiful (less analytical, more numerical)! Utilizing the parity As suggested by @BookYourLuck , we can utilize the parity of functions to restrict the space of polynomials in which we search. That is, since $\text{erf}$ is an odd function, i.e. $f(-x)=-f(x)$ , and $\text{tanh}$ is also an odd function, polynomial function $\text{pol}(x)$ inside $\text{tanh}$ should also be odd (should only have odd powers of $x$ ) to have $$\text{erf}(-x)\simeq\text{tanh}(\text{pol}(-x))=\text{tanh}(-\text{pol}(x))=-\text{tanh}(\text{pol}(x))\simeq-\text{erf}(x)$$ Previously, we were fortunate to end up with (almost) zero coefficients for even powers $x^2$ and $x^4$ , however in general, this might lead to low quality approximations that, for example, have a term like $0.23x^2$ that is being cancelled out by extra terms (even or odd) instead of simply opting for $0x^2$ . Sigmoid approximation A similar relationship holds between $\text{erf}(x)$ and $2\left(\sigma(x)-\frac{1}{2}\right)$ (sigmoid), which is proposed in the paper as another approximation, with mean squared error $\sim 10^{-4}$ for $x \in [-10, 10]$ . Here is a Python code for generating data points, fitting the functions, and calculating the mean squared errors: import math
import numpy as np
import scipy.optimize as optimize
def tahn(xs, a):
return [math.tanh(math.sqrt(2 / math.pi) * (x + a * x**3)) for x in xs]
def sigmoid(xs, a):
return [2 * (1 / (1 + math.exp(-a * x)) - 0.5) for x in xs]
print_points = 0
np.random.seed(123)
# xs = [-2, -1, -.9, -.7, 0.6, -.5, -.4, -.3, -0.2, -.1, 0,
# .1, 0.2, .3, .4, .5, 0.6, .7, .9, 2]
# xs = np.concatenate((np.arange(-1, 1, 0.2), np.arange(-4, 4, 0.8)))
# xs = np.concatenate((np.arange(-2, 2, 0.5), np.arange(-8, 8, 1.6)))
xs = np.arange(-10, 10, 0.001)
erfs = np.array([math.erf(x/math.sqrt(2)) for x in xs])
ys = np.array([0.5 * x * (1 + math.erf(x/math.sqrt(2))) for x in xs])
# Fit tanh and sigmoid curves to erf points
tanh_popt, _ = optimize.curve_fit(tahn, xs, erfs)
print('Tanh fit: a=%5.5f' % tuple(tanh_popt))
sig_popt, _ = optimize.curve_fit(sigmoid, xs, erfs)
print('Sigmoid fit: a=%5.5f' % tuple(sig_popt))
# curves used in https://mycurvefit.com:
# 1. sinh(sqrt(2/3.141593)*(x+a*x^2+b*x^3+c*x^4+d*x^5))/cosh(sqrt(2/3.141593)*(x+a*x^2+b*x^3+c*x^4+d*x^5))
# 2. sinh(sqrt(2/3.141593)*(x+b*x^3))/cosh(sqrt(2/3.141593)*(x+b*x^3))
y_paper_tanh = np.array([0.5 * x * (1 + math.tanh(math.sqrt(2/math.pi)*(x + 0.044715 * x**3))) for x in xs])
tanh_error_paper = (np.square(ys - y_paper_tanh)).mean()
y_alt_tanh = np.array([0.5 * x * (1 + math.tanh(math.sqrt(2/math.pi)*(x + tanh_popt[0] * x**3))) for x in xs])
tanh_error_alt = (np.square(ys - y_alt_tanh)).mean()
# curve used in https://mycurvefit.com:
# 1. 2*(1/(1+2.718281828459^(-(a*x))) - 0.5)
y_paper_sigmoid = np.array([x * (1 / (1 + math.exp(-1.702 * x))) for x in xs])
sigmoid_error_paper = (np.square(ys - y_paper_sigmoid)).mean()
y_alt_sigmoid = np.array([x * (1 / (1 + math.exp(-sig_popt[0] * x))) for x in xs])
sigmoid_error_alt = (np.square(ys - y_alt_sigmoid)).mean()
print('Paper tanh error:', tanh_error_paper)
print('Alternative tanh error:', tanh_error_alt)
print('Paper sigmoid error:', sigmoid_error_paper)
print('Alternative sigmoid error:', sigmoid_error_alt)
if print_points == 1:
print(len(xs))
for x, erf in zip(xs, erfs):
print(x, erf) Output: Tanh fit: a=0.04485
Sigmoid fit: a=1.70099
Paper tanh error: 2.4329173471294176e-08
Alternative tanh error: 2.698034519269613e-08
Paper sigmoid error: 5.6479106346814546e-05
Alternative sigmoid error: 5.704246564663601e-05 | {
"source": [
"https://datascience.stackexchange.com/questions/49522",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/50406/"
]
} |
51,065 | I'm trying to read and understand the paper Attention is all you need and in it, there is a picture: I don't know what positional encoding is. by listening to some youtube videos I've found out that it is an embedding having both meaning and position of a word in it and has something to do with $sin(x)$ or $cos(x)$ but I couldn't understand what exactly it is and how exactly it is doing that. so I'm here for some help. thanks in advance. | For example, for word $w$ at position $pos \in [0, L-1]$ in the input sequence $\boldsymbol{w}=(w_0,\cdots, w_{L-1})$ , with 4-dimensional embedding $e_{w}$ , and $d_{model}=4$ , the operation would be $$\begin{align*}e_{w}' &= e_{w} + \left[sin\left(\frac{pos}{10000^{0}}\right), cos\left(\frac{pos}{10000^{0}}\right),sin\left(\frac{pos}{10000^{2/4}}\right),cos\left(\frac{pos}{10000^{2/4}}\right)\right]\\
&=e_{w} + \left[sin\left(pos\right), cos\left(pos\right),sin\left(\frac{pos}{100}\right),cos\left(\frac{pos}{100}\right)\right]\\
\end{align*}$$ where the formula for positional encoding is as follows $$\text{PE}(pos,2i)=sin\left(\frac{pos}{10000^{2i/d_{model}}}\right),$$ $$\text{PE}(pos,2i+1)=cos\left(\frac{pos}{10000^{2i/d_{model}}}\right).$$ with $d_{model}=512$ (thus $i \in [0, 255]$ ) in the original paper. This technique is used because there is no notion of word order (1st word, 2nd word, ..) in the proposed architecture. All words of input sequence are fed to the network with no special order or position; in contrast, in RNN architecture, $n$ -th word is fed at step $n$ , and in ConvNet, it is fed to specific input indices. Therefore, proposed model has no idea how the words are ordered. Consequently, a position-dependent signal is added to each word-embedding to help the model incorporate the order of words. Based on experiments, this addition not only avoids destroying the embedding information but also adds the vital position information. This blog by Kazemnejad explains that the specific choice of ( $sin$ , $cos$ ) pair helps the model in learning patterns that rely on relative positions. As an example, consider a pattern like if 'are' comes after 'they', then 'playing' is more likely than 'play' which relies on relative position " $pos(\text{are}) - pos(\text{they})$ " being 1, independent of absolute positions $pos(\text{are})$ and $pos(\text{they})$ . To learn this pattern, any positional encoding should make it easy for the model to arrive at an encoding for "they are" that (a) is different from "are they" (considers relative position), and (b) is independent of where "they are" occurs in a given sequence (ignores absolute positions), which is what $\text{PE}$ manages to achieve. This article by Jay Alammar explains the paper with excellent visualizations. The example on positional encoding calculates $\text{PE}(.)$ the same, with the only difference that it puts $sin$ in the first half of embedding dimensions (as opposed to even indices) and $cos$ in the second half (as opposed to odd indices). As pointed out by ShaohuaLi , this difference does not matter since vector operations would be invariant to the permutation of dimensions. | {
"source": [
"https://datascience.stackexchange.com/questions/51065",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/65027/"
]
} |
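A compact NumPy sketch (added for illustration, not part of the original answer) of the sinusoidal positional encoding defined above, with sin on even embedding indices and cos on odd ones; for d_model = 4 the row at pos = 1 reproduces the [sin(1), cos(1), sin(1/100), cos(1/100)] example from the answer.
import numpy as np
def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # (seq_len, d_model / 2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even dimensions
    pe[:, 1::2] = np.cos(angles)                       # odd dimensions
    return pe
embeddings = np.random.rand(10, 4)                 # 10 words, d_model = 4
encoded = embeddings + positional_encoding(10, 4)  # position-aware embeddings
print(positional_encoding(10, 4)[1])               # [sin(1), cos(1), sin(0.01), cos(0.01)]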
51,084 | I would like to train different machine learning algorithms (SVM, Random Forest, CNN etc.) on the same data set (e.g. MNIST) and then compare their accuracies.
The goal would be to find out from which training data size onwards each method is preferable to the others.
To do this I continuously reduce the original training data set (of 60000 samples) and train the models on these reduced training data sets. If I then determine the accuracy using the original MNIST test dataset (10000 samples), of course I will get overfitting, e.g. with a training data set of 1000 samples I get a training accuracy of 95% and a test accuracy of 75%. The smaller the training data set, the lower the test accuracy, while the training accuracy remains at about the same level. Would it make sense also to reduce the test data set to restore the original 1:6 ratio of test set : training set?
Personally, I think that does not make sense.
Or have I thought incorrectly about that? | For example, for word $w$ at position $pos \in [0, L-1]$ in the input sequence $\boldsymbol{w}=(w_0,\cdots, w_{L-1})$ , with 4-dimensional embedding $e_{w}$ , and $d_{model}=4$ , the operation would be $$\begin{align*}e_{w}' &= e_{w} + \left[sin\left(\frac{pos}{10000^{0}}\right), cos\left(\frac{pos}{10000^{0}}\right),sin\left(\frac{pos}{10000^{2/4}}\right),cos\left(\frac{pos}{10000^{2/4}}\right)\right]\\
&=e_{w} + \left[sin\left(pos\right), cos\left(pos\right),sin\left(\frac{pos}{100}\right),cos\left(\frac{pos}{100}\right)\right]\\
\end{align*}$$ where the formula for positional encoding is as follows $$\text{PE}(pos,2i)=sin\left(\frac{pos}{10000^{2i/d_{model}}}\right),$$ $$\text{PE}(pos,2i+1)=cos\left(\frac{pos}{10000^{2i/d_{model}}}\right).$$ with $d_{model}=512$ (thus $i \in [0, 255]$ ) in the original paper. This technique is used because there is no notion of word order (1st word, 2nd word, ..) in the proposed architecture. All words of input sequence are fed to the network with no special order or position; in contrast, in RNN architecture, $n$ -th word is fed at step $n$ , and in ConvNet, it is fed to specific input indices. Therefore, proposed model has no idea how the words are ordered. Consequently, a position-dependent signal is added to each word-embedding to help the model incorporate the order of words. Based on experiments, this addition not only avoids destroying the embedding information but also adds the vital position information. This blog by Kazemnejad explains that the specific choice of ( $sin$ , $cos$ ) pair helps the model in learning patterns that rely on relative positions. As an example, consider a pattern like if 'are' comes after 'they', then 'playing' is more likely than 'play' which relies on relative position " $pos(\text{are}) - pos(\text{they})$ " being 1, independent of absolute positions $pos(\text{are})$ and $pos(\text{they})$ . To learn this pattern, any positional encoding should make it easy for the model to arrive at an encoding for "they are" that (a) is different from "are they" (considers relative position), and (b) is independent of where "they are" occurs in a given sequence (ignores absolute positions), which is what $\text{PE}$ manages to achieve. This article by Jay Alammar explains the paper with excellent visualizations. The example on positional encoding calculates $\text{PE}(.)$ the same, with the only difference that it puts $sin$ in the first half of embedding dimensions (as opposed to even indices) and $cos$ in the second half (as opposed to odd indices). As pointed out by ShaohuaLi , this difference does not matter since vector operations would be invariant to the permutation of dimensions. | {
"source": [
"https://datascience.stackexchange.com/questions/51084",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/62073/"
]
} |
52,015 | I'm fairly new to computer vision and I've read an explanation in a Medium post, however it still isn't clear to me how they truly differ. | Object Detection: the technology, related to computer vision and image processing, whose aim is to detect objects in an image. Semantic Segmentation: a technique that detects, for each pixel, the object category it belongs to. All object categories (labels) must be known to the model. Instance Segmentation: same as Semantic Segmentation, but dives a bit deeper; it identifies, for each pixel, the object instance it belongs to. The main difference is that it differentiates between two objects with the same label. Here's an example of the main difference. In the second image, where Semantic Segmentation is applied, the category (chair) is the output class, and all chairs are colored the same. In the third image, Instance Segmentation goes a step further and separates the instances (the chairs) from one another in addition to identifying the category (chair) in the first step. Hope this clears it up for you a bit. | {
"source": [
"https://datascience.stackexchange.com/questions/52015",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/74267/"
]
} |
52,023 | The response variable in a regression problem, $Y$ , is modeled using a data matrix $X$ . In notation, this means: $Y$ ~ $X$ However, $Y$ can be separated out into different components that can be modeled independently. $$Y = Y_1 + Y_2 + Y_3$$ Under what conditions would $M$ , the overall prediction, have better or worse performance than $M_1 + M_2 + M_3$ , a sum of individual models? To provide more background, the model used is a GBM. I was surprised to find that training a model for a specific $Y_i$ resulted in about equal performance than using the overall model $M$ to predict that $Y_i$ . The $Y_i$ 's are highly correlated. In hindsight, then this is not surprising because training a model for a vector correlated with the target also is correlated with the target. For analogy, take the case with a linear model and independent response variables. The overall model is $Y = X\beta$ It is trivial to see that the sum of the models is the model of the sum. $Y = X\beta = X\beta_1 + X\beta_2 + X\beta_3 = X(\beta_1 + \beta_2 + \beta_3)$ If the $Y$ 's are independent then the $\beta$ 's will be as well. This implies that each of the model coefficients will be unchanged. Take for example a two-dimensional case (where $X$ has two columns). For $i \neq j$ , $Y_i = X(\beta_i + \beta_j) = X\beta_j + 0$ | Object Detection : is the technology that is related to computer vision and image processing. Its aim? detect objects in an image. Semantic Segmentation : is a technique that detects , for each pixel , the object category it belongs to. All object categories ( labels ) must be known to the model. Instance Segmentation : same as Semantic Segmentation, but dives a bit deeper, it identifies , for each pixel, the object instance it belongs to. The main difference is that it differentiates between two objects with the same label. Here's an example of the main difference. In the second image where Semantic Segmentation is applied, the category ( chair ) is the output class, all chairs are colored the same. In the third image, the Instance Segmentation , goes a step further and separates the instances ( the chairs ) from one another in addition to identifying the category ( chair ) in the first step. Hope this clears it up for you a bit. | {
"source": [
"https://datascience.stackexchange.com/questions/52023",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/68433/"
]
} |
52,157 | I'm not sure why you need to multiply by $\frac1{2m}$ in the beginning. I understand that you would have to divide the whole sum by $\frac1{m}$, but why do we have to multiply $m$ by two? Is it because we have two $\theta$ here in the example? | It is simple. It is because when you take the derivative of the cost function, which is used in updating the parameters during gradient descent, the $2$ from the power gets cancelled by the $\frac{1}{2}$ multiplier, so the derivation is cleaner (the short derivation after this row spells this out). This and similar techniques are widely used in math in order "to make the derivations mathematically more convenient". You can simply remove the multiplier, see here for example, and expect the same result. | {
"source": [
"https://datascience.stackexchange.com/questions/52157",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/74441/"
]
} |
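To make the cancellation described in the answer above explicit, here is the usual derivation, written as a sketch for a single parameter $\theta_j$ (the last step assumes a linear model $h_\theta(x) = \theta^T x$ , so that $\partial h_\theta(x)/\partial \theta_j = x_j$ ): $$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$ $$\frac{\partial J}{\partial \theta_j} = \frac{2}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)\frac{\partial h_\theta(x^{(i)})}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$ Without the $\frac{1}{2}$ , the gradient would simply carry an extra factor of $2$ , which only rescales the learning rate and does not change the minimizer.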
52,632 | I have a doubt regarding the cross validation approach and train-validation-test approach. I was told that I can split a dataset into 3 parts: Train: we train the model. Validation: we validate and adjust model parameters. Test: never seen before data. We get an unbiased final estimate. So far, we have split into three subsets. Until here everything is okay. Attached is a picture: Then I came across the K-fold cross validation approach and what I don’t understand is how I can relate the Test subset from the above approach. Meaning, in 5-fold cross validation we split the data into 5 and in each iteration the non-validation subset is used as the train subset and the validation is used as test set. But, in terms of the above mentioned example, where is the validation part in k-fold cross validation? We either have validation or test subset. When I refer myself to train/validation/test, that “test” is the scoring: Model development is generally a two-stage process. The first stage is training and validation, during which you apply algorithms to data for which you know the outcomes to uncover patterns between its features and the target variable. The second stage is scoring, in which you apply the trained model to a new dataset. Then, it returns outcomes in the form of probability scores for classification problems and estimated averages for regression problems. Finally, you deploy the trained model into a production application or use the insights it uncovers to improve business processes. As an example, I found the Sci-Kit learn cross validation version as you can see in the following picture: When doing the splitting, you can see that the algorithm that they give you, only takes care of the training part of the original dataset. So, in the end, we are not able to perform the Final evaluation process as you can see in the attached picture. Thank you! scikitpage | If k-fold cross-validation is used to optimize the model parameters, the training set is split into k parts. Training happens k times, each time leaving out a different part of the training set. Typically, the error of these k-models is averaged. This is done for each of the model parameters to be tested, and the model with the lowest error is chosen. The test set has not been used so far. Only at the very end the test set is used to test the performance of the (optimized) model. # example: k-fold cross validation for hyperparameter optimization (k=3)
original data split into training and test set:
|---------------- train ---------------------| |--- test ---|
cross-validation: test set is not used, error is calculated from
validation set (k-times) and averaged:
|---- train ------------------|- validation -| |--- test ---|
|---- train ---|- validation -|---- train ---| |--- test ---|
|- validation -|----------- train -----------| |--- test ---|
final measure of model performance: model is trained on all training data
and the error is calculated from test set:
|---------------- train ---------------------|--- test ---| In some cases, k-fold cross-validation is used on the entire data set if no parameter optimization is needed (this is rare, but it happens). In this case there would not be a validation set and the k parts are used as a test set one by one. The error of each of these k tests is typically averaged. # example: k-fold cross validation
|----- test -----|------------ train --------------|
|----- train ----|----- test -----|----- train ----|
|------------ train --------------|----- test -----| | {
"source": [
"https://datascience.stackexchange.com/questions/52632",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/73650/"
]
} |
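A minimal scikit-learn sketch of the workflow described in the answer above (the data, the Ridge model and the candidate alpha values are placeholders, not taken from the question):
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import Ridge

X, y = np.random.rand(100, 5), np.random.rand(100)  # placeholder data

# hold out a test set that is never touched during tuning
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# k-fold cross-validation (k=3) on the training set only, once per candidate hyperparameter
best_alpha, best_score = None, -np.inf
for alpha in [0.1, 1.0, 10.0]:
    score = cross_val_score(Ridge(alpha=alpha), X_train, y_train, cv=3).mean()
    if score > best_score:
        best_alpha, best_score = alpha, score

# final measure of performance: refit on all training data, evaluate once on the untouched test set
final_model = Ridge(alpha=best_alpha).fit(X_train, y_train)
print(final_model.score(X_test, y_test))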
52,647 | I am trying to guess a home price and at the end I intend to figure out a formula by using linear regression. As you can see here , I have 1480 samples with 45 features in which price (fiyat) is the target variable. Do higher values for $RMSE$ and $MAE$ mean that the dataset cannot be trained in a good manner? Is there a way to reduce these values?
As you can see $R2$ seems well. How can we say that how much percentage of error occurs for the guesses on average? If we can how much could you say? There is really substantial difference between the prices and guesses as being seen below: | {
"source": [
"https://datascience.stackexchange.com/questions/52647",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/73766/"
]
} |
52,653 | Here is a simple polynomial equation: b^2 + 2b + 1 = 0 I could easily solve this as: import numpy as np
from scipy.optimize import fsolve
eq = lambda b : np.power(b,2) + 2*b + 1
fsolve(eq, np.linspace(0,1,2)) Similarly, I could solve any equation that has finite number of terms. But how do I solve an equation with infinite number of terms which is given as: $G_t^{\lambda}=(1-\lambda) \sum \limits_{n=1}^{\infty} \lambda^{n-1}G_{t:t+n}$ The above equation could be written as: (1 - l) * (5.5 + 4.0*l + 4*l^2 + 6*l^3 + 5*l^4 + 5*l^5 + 5*l^6 + 5*l^7 + 5*l^8 + 5*l^9 + 5*l^10 ) = 5 when n goes from 1 to 10. But I want to solve this for sufficiently large value of n such that LHS ~= RHS. I know the values of LHS and G1 -> Ginf but cannot understand how could I compute the value of lambda here. I tried looking at numpy polynomial functions but could not find a function that is relevant here. | {
"source": [
"https://datascience.stackexchange.com/questions/52653",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/73104/"
]
} |
54,908 | Which one is the right approach to make data normalization - before or after train-test split? Normalization before split normalized_X_features = pd.DataFrame(
StandardScaler().fit_transform(X_features),
columns = X_features.columns
)
x_train, x_test, y_train, y_test = train_test_split(
normalized_X_features,
Y_feature,
test_size=0.20,
random_state=4
)
LR = LogisticRegression(
C=0.01,
solver='liblinear'
).fit(x_train, y_train)
y_test_pred = LR.predict(x_test) Normalization after split x_train, x_test, y_train, y_test = train_test_split(
X_features,
Y_feature,
test_size=0.20,
random_state=4
)
normalized_x_train = pd.DataFrame(
StandardScaler().fit_transform(x_train),
columns = x_train.columns
)
LR = LogisticRegression(
C=0.01,
solver='liblinear'
).fit(normalized_x_train, y_train)
normalized_x_test = pd.DataFrame(
StandardScaler().fit_transform(x_test),
columns = x_test.columns
)
y_test_pred = LR.predict(normalized_x_test) So far I have seen both approaches. | Normalization across instances should be done after splitting the data between training and test set, using only the data from the training set. This is because the test set plays the role of fresh unseen data, so it's not supposed to be accessible at the training stage. Using any information coming from the test set before or during training is a potential bias in the evaluation of the performance. [Precision thanks to Neil's comment] When normalizing the test set, one should apply the normalization parameters previously obtained from the training set as-is. Do not recalculate them on the test set, because they would be inconsistent with the model and this would produce wrong predictions. | {
"source": [
"https://datascience.stackexchange.com/questions/54908",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/69963/"
]
} |
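To make the recommended pattern concrete, here is a minimal sketch (X_features and Y_feature refer to the asker's data, and the LogisticRegression settings are copied from the question for illustration only):
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

x_train, x_test, y_train, y_test = train_test_split(X_features, Y_feature, test_size=0.20, random_state=4)

scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)  # fit the scaler on the training data only
x_test_scaled = scaler.transform(x_test)        # reuse the same means/stds for the test data

LR = LogisticRegression(C=0.01, solver='liblinear').fit(x_train_scaled, y_train)
y_test_pred = LR.predict(x_test_scaled)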
55,545 | I have been doing this online course Introduction to TensorFlow for AI, ML and DL . Here in one part, they were showing a CNN model for classifying humans and horses. In this model, the first Conv2D layer had 16 filters , followed by two more Conv2D layers with 32 and 64 filters respectively. I am not sure how the number of filters is correlated with the deeper convolution layers. | For this you need to understand what filters actually do. Every layer of filters is there to capture patterns. For example, the first layer of filters captures patterns like edges, corners, dots etc. Subsequent layers combine those patterns to make bigger patterns (like combining edges to make squares, circles, etc.). Now as we move forward in the layers, the patterns get more complex; hence there are larger combinations of patterns to capture. That's why we increase the number of filters in subsequent layers, to capture as many combinations as possible. | {
"source": [
"https://datascience.stackexchange.com/questions/55545",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/52350/"
]
} |
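A small Keras sketch of the pattern discussed in the answer above, with the number of filters growing in deeper layers (the input shape and layer sizes are illustrative and not taken from the course):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(300, 300, 3)),  # low-level patterns: edges, corners, dots
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),  # combinations of those patterns
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),  # larger, more complex combinations
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')  # binary output, e.g. horse vs human
])
model.summary()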
58,845 | Using tensorflow-gpu 2.0.0rc0. I want to choose whether it uses the GPU or the CPU. | I've seen some suggestions elsewhere, but they are old and do not apply very well to newer TF versions. What worked for me was this: import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1" When that variable is defined and equal to -1, TF uses the CPU even when a CUDA GPU is available. | {
"source": [
"https://datascience.stackexchange.com/questions/58845",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/23556/"
]
} |
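One caveat worth adding, to the best of my knowledge: the variable should be set before TensorFlow is imported, otherwise the CUDA devices may already have been picked up. A minimal sketch:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide all CUDA GPUs from TensorFlow

import tensorflow as tf  # import only after the variable has been set
print(tf.config.experimental.list_physical_devices('GPU'))  # expected to print an empty list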
60,950 | MinMaxScaler() in scikit-learn is used for data normalization (a.k.a feature scaling). Data normalization is not necessary for decision trees. Since XGBoost is based on decision trees, is it necessary to do data normalization using MinMaxScaler() for data to be fed to XGBoost machine learning models? | Your rationale is indeed correct: decision trees do not require normalization of their inputs; and since XGBoost is essentially an ensemble algorithm comprised of decision trees, it does not require normalization for the inputs either. For corroboration, see also the thread Is Normalization necessary? at the XGBoost Github repo, where the answer by the lead XGBoost developer is a clear: no you do not have to normalize the features | {
"source": [
"https://datascience.stackexchange.com/questions/60950",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/57724/"
]
} |
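If you want to convince yourself empirically, here is a small sketch comparing predictions with and without MinMaxScaler (it assumes the xgboost and scikit-learn packages; since min-max scaling is a monotonic transform, the tree splits are expected to be equivalent and the predictions essentially identical):
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBRegressor

X = np.random.rand(200, 4) * 100
y = X @ np.array([1.0, 2.0, 3.0, 4.0]) + np.random.rand(200)
X_scaled = MinMaxScaler().fit_transform(X)

model_raw = XGBRegressor(n_estimators=50, random_state=0).fit(X, y)
model_scaled = XGBRegressor(n_estimators=50, random_state=0).fit(X_scaled, y)

diff = np.abs(model_raw.predict(X) - model_scaled.predict(X_scaled)).max()
print(diff)  # expected to be at or very near 0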
64,278 | I was reading an article about convolutional neural networks, and I found something that I don't understand, which is: The filter must have the same number of channels as the input image so that the element-wise multiplication can take place. Now, what I don't understand is: What is a channel in a convolutional neural network? I have tried looking for the answer, but can't understand what is it yet. Can someone explain it to me? Thanks in advance. | Let's assume that we are talking about 2D convolutions applied on images. In a grayscale image, the data is a matrix of dimensions $w \times h$ , where $w$ is the width of the image and $h$ is its height. In a color image, we normally have 3 channels : red, green and blue; this way, a color image can be represented as a matrix of dimensions $w \times h \times c$ , where $c$ is the number of channels, that is, 3. A convolution layer receives the image ( $w \times h \times c$ ) as input, and generates as output an activation map of dimensions $w' \times h' \times c'$ . The number of input channels in the convolution is $c$ , while the number of output channels is $c'$ . The filter for such a convolution is a tensor of dimensions $f \times f \times c \times c'$ , where $f$ is the filter size (normally 3 or 5). This way, the number of channels is the depth of the matrices involved in the convolutions. Also, a convolution operation defines the variation in such depth by specifying input and output channels. These explanations are directly extrapolable to 1D signals or 3D signals, but the analogy with image channels made it more appropriate to use 2D signals in the example. | {
"source": [
"https://datascience.stackexchange.com/questions/64278",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/84229/"
]
} |
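A short Keras sketch illustrating the dimensions described in the answer above (the sizes are arbitrary):
import tensorflow as tf

w, h, c, c_out, f = 224, 224, 3, 32, 5  # input width/height/channels, output channels, filter size
layer = tf.keras.layers.Conv2D(filters=c_out, kernel_size=f, padding='same')

x = tf.random.normal((1, h, w, c))  # one RGB image (batch dimension first)
y = layer(x)

print(layer.kernel.shape)  # (5, 5, 3, 32), i.e. f x f x c x c'
print(y.shape)             # (1, 224, 224, 32), i.e. the activation map w' x h' x c'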
64,441 | As you can see, it is about a binary classification with linearSVC. The class 1 has a higher precision than class 0 (+7%), but class 0 has a higher recall than class 1 (+11%). How would you interpret this? And two other questions: what does "support" stand for? The precision and recall scores in the classification report are different compared to the results of sklearn.metrics.precision_score or recall_score . Why is that so? | The classification report is about key metrics in a classification problem. You'll have precision, recall, f1-score and support for each class you're trying to find. The recall means "how many elements of this class you find over the whole number of elements of this class". The precision means "how many of the elements predicted as this class are correctly classified". The f1-score is the harmonic mean of precision and recall. The support is the number of occurrences of the given class in your dataset (so you have 37.5K of class 0 and 37.5K of class 1, which is a really well balanced dataset). The thing is, precision and recall are mostly useful for imbalanced datasets, because on a highly imbalanced dataset a 99% accuracy can be meaningless. I would say that you don't really need to focus on these metrics for this problem, unless a given class should absolutely be correctly determined. To answer your other question: you cannot directly compare precision and recall across the two classes; this only means your classifier is better at finding class 0 than class 1. Precision and recall from sklearn.metrics.precision_score or recall_score should not be different from the report. But since the code is not provided, it is impossible to determine the root cause of this. | {
"source": [
"https://datascience.stackexchange.com/questions/64441",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/-1/"
]
} |
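Regarding the last point, one common reason for an apparent mismatch (a guess, since the asker's code is not shown): sklearn.metrics.precision_score and recall_score default to average='binary' with pos_label=1, i.e. they report the score of class 1 only, whereas the classification report shows one row per class. A small sketch:
from sklearn.metrics import classification_report, precision_score, recall_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 1]

print(classification_report(y_true, y_pred))
print(precision_score(y_true, y_pred))               # class 1 only (default pos_label=1)
print(precision_score(y_true, y_pred, pos_label=0))  # matches the class 0 row of the report
print(recall_score(y_true, y_pred, pos_label=0))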
65,736 | Why does Keras need the TensorFlow engine? I am not getting correct directions on why we need Keras. We can use TensorFlow to build a neural network model, but why do most people use Keras with TensorFlow as backend? | This makes more sense when understood in its historical context. These were the chronological events: April 2009 Theano 0.1 is released . It would dominate the deep learning framework scene for many many years. June 2015 Keras is created by François Chollet . The goal was to create an abstraction layer to make Theano easier to use, enabling fast prototyping. August 2015 Google hires François Chollet . November 2015 Tensorflow is released by Google, with much inspiration from Theano and its declarative computational graph paradigm. December 2015 Keras is refactored to allow for pluggable backend engines, and now it offers backend implementations for Theano and Tensorflow. Other backends were later supported by Keras (CNTK, MxNet), but they never got much traction. Time passes by and the overlap between Tensorflow and Keras grows. Tensorflow ends up duplicating many of the functionalities in Keras (apart from the multiple APIs within Tensorflow that also had big overlaps). September 2017 Theano is discontinued . November 2017 Keras is bundled with Tensorflow as tf.keras . From this point on there are 2 different Keras: the one bundled with Tensorflow and the one that supports multiple backend engines. Both are maintained by the same people and are kept in sync at API level. At some point, the roadmap for Tensorflow 2.0 is defined, choosing to pursue an imperative model like PyTorch . The person leading the Tensorflow API refactoring is François Chollet. This refactoring included a reorganization of the functionality to avoid duplications. November 2018 some crucial functionalities of Tensorflow are to be moved to tf.keras , generating a heated debate September 2019 Keras 2.3 is announced to be the last release of the multi-backend version of Keras Now, THE ANSWER to your question: Tensorflow is the most used Keras backend because it is the only one with a relevant user base that is under active development and, furthermore, the only version of Keras that is actively developed and maintained is one with Tensorflow. So, summing up: At the beginning of Keras, the overlap with Tensorflow was small. Tensorflow was a bit difficult to use, and Keras simplified it a lot. Later, Tensorflow incorporated many functionalities similar to Keras'. Keras became less necessary. Then, apart from the multi-backend version, Keras was bundled with Tensorflow. Their separation line blurred over the years. The multi-backend Keras version was discontinued. Now the only Keras is the one bundled with Tensorflow. Update : the relationship between Keras and Tensorflow is best understood with an example: The dependency between Keras and Tensorflow is internal to Keras, it is not exposed to the programmer working with Keras. For example, in the source code of Keras, there is an implementation of a convolutional layer ; this implementation calls package keras.backend to actually run the convolution computation ; depending on the Keras configuration file, this backend is set to use the Tensorflow backend implementation in keras.backend.tensorflow_backend.py ; this Keras file just invokes Tensorflow to compute the convolution Update 2 : new important events in the timeline: August 2021 : Tensorflow 2.6.0 no longer has Keras as part of it . 
Keras has now its own PIP package ( keras ) and lives on its own github repo . | {
"source": [
"https://datascience.stackexchange.com/questions/65736",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/83473/"
]
} |
65,767 | How do you implement STS(Semantic Textual Similarity) on an unlabelled dataset? The dataset column contains Unique_id , text1 (contains paragraph), and text2 (contains paragraph). Ex: Column representation: Unique_id | Text1 | Text2 Unique_id 0 Text1 public show for Reynolds suspension of his coaching licence. portrait Sir Joshua Reynolds portrait of omai will get a public airing following fears it would stay hidden because of an export wrangle. Text2 then requested to do so by Spain's anti-violence commission. The fine was far less than the expected amount of about £22 000 or even the suspension of his coaching license. Unique_id 1 Text1 Groening. Gervais has already begun writing the script but is keeping its subject matter a closely guarded secret. he will also write a part for himself in the episode. I've got the rough idea but this is the most intimidating project of my career. Text2 Philadelphia said they found insufficient evidence to support the woman s allegations regarding an alleged incident in January 2004. The woman reported the allegations to Canadian authorities last month. Cosby s lawyer Walter m Phillips jr said the comedian was pleased with the decision. In the above problem, I've to compare two paragraphs of texts i.e. Text1 & Text2 , and then I've to compare semantic similarity between two texts. If they are semantically similar then it will print '1' if not then '0' Any reference implementation link or any suggestions! Thanks in advance! | {
"source": [
"https://datascience.stackexchange.com/questions/65767",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/87735/"
]
} |
65,773 | I have a data set of districts, farmland area and fertilizer subsidies issued for those areas. I.e, using made up numbers, district | area | subsidy | subsidy per area (computed)
abc | 20 | 500 | 25
cde | 30 | 750 | 25
fgh | 0.02 | 15 | 750 <--- looks off I'm trying to visualise the subsidy per area but in districts that have very small amounts of farming the subsidy per area seems abnormal. The nationwide average is pretty much around 25. So, I can safely say that the subsidy amount is directly related to the area being subsidised, which is to be expected as fertiliser usage is dependent on the area being farmed. My theory is that the exception on small areas is due to there being a minimum subsidy amount irrespective of the land area. Are there any techniques to deal with the above scenario when visualising data? | {
"source": [
"https://datascience.stackexchange.com/questions/65773",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/87740/"
]
} |
66,207 | I am reading this article on how to use BERT by Jay Alammar and I understand things up until: For sentence classification, we’re only interested in BERT’s output for the [CLS] token, so we select that slice of the cube and discard everything else. I have read this topic , but still have some questions: Isn't the [CLS] token at the very beginning of each sentence? Why is that "we are only interested in BERT's output for the [CLS] token"? Can anyone help me get my head around this? Thanks! | CLS stands for classification and it's there to represent sentence-level classification. In short, this token was introduced to make BERT's pooling scheme work: its final hidden state acts as an aggregate representation of the whole input sequence, which is exactly what a sentence-level classifier needs. I suggest reading up on this blog where this is also covered in detail. | {
"source": [
"https://datascience.stackexchange.com/questions/66207",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/42519/"
]
} |
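As an illustration of how the [CLS] output is typically extracted for classification, a sketch using the Hugging Face transformers library (the model name is just an example):
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("A first sentence to classify.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0, :]  # hidden state of the [CLS] token (position 0)
print(cls_embedding.shape)  # (1, 768) for bert-base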
73,093 | In the documentation it has been mentioned that y_pred needs to be in the range of [-inf to inf] when from_logits=True . I truly didn't understand what this means, since the probabilities need to be in the range of 0 to 1! Can someone please explain in simple words the effect of using from_logits=True ? model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy']) | The from_logits=True attribute informs the loss function that the output values generated by the model are not normalized, a.k.a. logits. In other words, the softmax function has not been applied to them to produce a probability distribution. Therefore, the output layer in this case does not have a softmax activation function: out = tf.keras.layers.Dense(n_units) # <-- linear activation function The softmax function would be automatically applied to the output values by the loss function. Therefore, this does not make a difference compared with the scenario where you use from_logits=False (the default) and a softmax activation function on the last layer; however, in some cases, it might help with numerical stability during training of the model. You may also find this and this answer relevant and useful regarding the numerical stability when from_logits=True . | {
"source": [
"https://datascience.stackexchange.com/questions/73093",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/95887/"
]
} |
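The two equivalent setups mentioned in the answer above, side by side (a sketch; n_units stands for the number of classes and the hidden layer is arbitrary):
import tensorflow as tf

n_units = 10

# Option 1: linear output layer, the loss applies the softmax itself
model_a = tf.keras.Sequential([tf.keras.layers.Dense(64, activation='relu'),
                               tf.keras.layers.Dense(n_units)])  # raw logits
model_a.compile(optimizer='adam',
                loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# Option 2: softmax output layer, the loss expects probabilities (the default)
model_b = tf.keras.Sequential([tf.keras.layers.Dense(64, activation='relu'),
                               tf.keras.layers.Dense(n_units, activation='softmax')])
model_b.compile(optimizer='adam',
                loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                metrics=['accuracy'])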
74,465 | I am trying to understand what it really means to calculate an ANOVA F value for feature selection for a binary classification problem. As I understand from the calculation of ANOVA from basic statistics, we should have at least 2 samples for which we can calculate the ANOVA value. So does this mean in the Sklearn implementation that these samples are taken from within each feature? What exactly do these samples represent in the case of feature selection for this problem? I tried to setup a very simple example which I have listed below, but I am still struggling to understand what the ANOVA value really means here? I'm also struggling to understand how to calculate this by hand, which usually helps me see what is happening on the inside. In this example, repayment status 0 means the loan is repaid and 1 means it defaulted. I have only supplied 5 rows of data to keep it simple. The code and results are as follows: | Intuition We have two classes and we want to find a score for each feature saying "how well this feature discriminates between two classes". Now look at the figure bellow. There are two classes red and blue and two features on $x$ and $y$ axes. $x$ feature is a better separator than $y$ because if we project data on $x$ axis we get two completely separated classes but if we project data onto $y$ , two classes have overlap in the middle of axis (comment if we need more clarification). What makes $x$ better than $y$ ? As you see in the figure above: According to $x$ , two classes are far from each other . Math Translation: The distance between means of class distributions on $x$ is more than $y$ . According to $x$ , the scatter of classes do not fall on each other but according to $y$ they do. It means that according to $x$ , classes are more compact so more probable to not have an overlap with another class. Math Translation: The variance of each single class according to $x$ is less than those of $y$ . Now we can easily say $\frac{distance\_between\_classes}{compactness\_of\_classes}$ is a good score! Higher this score is, better the feature discriminates between classes. Now we know, according to this definition, what $good$ and $bad$ features mean. Let's find a math formulation to quantize it. Mathematics (to do on the paper) Let's formulate our two criteria: Distance between means of class distributions is the numerator. Population is taken into account, I assume for statistical significance (needs a reference from a statistician!). A concept similar to sample variances of classes is the denominator. Here instead of dividing sum of squares by $(sample\_population -1)$ , we sum up all $(sample\_population -1)$ s and divide the final value by them. Now Back To Your Data To calculate the above you calculate sum of between-class distances and sum of within-class variations for each feature according to different classes. I do it for only one feature. Let's choose Loan . Class 1: [5000, 18000]
Class 2: [47500, 45600, 49500]
Mean of all points: (47500 + 45600 + 49500 + 5000 + 18000) / 5 = 33120
Mean 1: (5000 + 18000) / 2 = 11500
Mean 2: (47500 + 45600 + 49500) / 3 = 47533
Numerator: 2 x (11500 - 33120)^2 + 3 x (47533 - 33120)^2 = 1,558,052,507 For denominator we go with Sum of Squares Within class (it is simply the numerator in formulation of sample variance ): SSW 1: (5000 - 11500)^2 + (18000 - 11500)^2 = 84,500,000
SSW 2: (47500 - 47533)^2 + (45600 - 47533)^2 + (49500 - 47533)^2 = 7,606,667
Na = 2, Nb = 3 --> (Na - 1) + (Nb - 1) = 1 + 2 = 3
Denominator: (84,500,000 + 7,606,667)/3 = 30,702,222 Now the F-Score for feature Loan is: F-Score: 1,558,052,507 / 30,702,222 = 50.74 as you see in your calculation in Python. Note: I tried to explain this in a simple way. For example, the denominator of the sample variance is called the degrees of freedom, but I skipped such terms for simplicity. Just understand the main idea: the further apart the means and the smaller the within-class variances, the better the feature. You can formulate such a score yourself as well (however, you will not have p-values anymore ;) ). Finding p-values and understanding what they mean is another story, which I skipped. Hope it helped.
Good Luck! | {
"source": [
"https://datascience.stackexchange.com/questions/74465",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/97464/"
]
} |
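The hand calculation above can be cross-checked with scikit-learn's f_classif, which performs the same one-way ANOVA (a sketch using only the Loan feature and the two classes from the example):
import numpy as np
from sklearn.feature_selection import f_classif

loan = np.array([[5000], [18000], [47500], [45600], [49500]])  # Loan amounts
status = np.array([1, 1, 0, 0, 0])                             # repayment status (class labels)

f_values, p_values = f_classif(loan, status)
print(f_values)  # expected to be close to 50.74
print(p_values)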
76,824 | I originally came from R, but Python seems to be the more common language these days. Ideally, I would do all my coding in Python as the syntax is easier and I've had more real life experience using it - and switching back and forth is a pain. Out side of ML type stuff, all of the statistical analysis I've done have been in R - like regressions, time series, ANOVA, logistic regression etc. I have never really done that type of stuff in Python. However, I am trying to create a bunch of code templates for myself, and before I start, I would like to know if Python is deep enough to completely replace R as my language of choice. I eventually do plan on moving more towards ML, and I know Python can do that, and eventually I would imagine I have to go to a more base language like C++. Anyone know what are the limitations of Python when it comes to statistical analysis or has as link to the pros and cons of using R vs. Python as the main language for statistical analysis? | Python is more "general purpose" while R has a clear(er) focus on statistics. However, most (if not all) things you can do in R can be done in Python as well. The difference is that you need to use additional packages in Python for some things you can do in base R. Some examples: Data frames are base R while you need to use Pandas in Python. Linear models ( lm ) are base R while you need to use statsmodels or scikit in Python. There are important conceptional differences to be considered. For some rather basic mathematical operations you would need to use numpy . Overall this leads to some additional effort (and knowledge) needed to work fluently in Python. I personally often feel more comfortable working with base R since I feel like being "closer to the data" in (base) R. However, in other cases, e.g. when I use boosting or neural nets, Python seems to have an advantage over R. Many algorithms are developed in C++ (e.g. Keras , LightGBM ) and adapted to Python and (often later to) R. At least when you work with Windows, this often works better with Python. You can use things like Tensorflow/Keras, LightGBM, Catboost in R, but it sometimes can be daunting to get the additional package running in R (especially with GPU support). Many packages/methods are available for R and Python, such as GLMnet ( for R / for Python ). You can also see based on the Labs of " Introduction to Statistical Learning " - which are available for R and for Python as well - that there is not so much of a difference between the two languages in terms of what you can do. The difference is more like how things are done. Finally, since Python is more "general purpose" than R (at least in my view), there are interesting and funny things you can do with Python (beyond statistics) which you cannot do with R (at least it is harder). | {
"source": [
"https://datascience.stackexchange.com/questions/76824",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/87923/"
]
} |
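As a small illustration of the point above about linear models, a sketch of the lm-style workflow in Python using statsmodels (the data frame is random placeholder data):
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({"x1": np.random.rand(50), "x2": np.random.rand(50)})
df["y"] = 2 * df["x1"] - df["x2"] + np.random.normal(scale=0.1, size=50)

model = smf.ols("y ~ x1 + x2", data=df).fit()  # roughly the counterpart of R's lm(y ~ x1 + x2)
print(model.summary())  # coefficients, R-squared, p-values, etc.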
3 | I see that whenever someone does DevOps, it's mostly about automating things like deployment etc. But where does automation end and DevOps begin? | A big part of DevOps is making it possible to release very often. That comes with automated build, automated testing, etc. You can say that to achieve its goals, DevOps need to use automation to be efficient. Here's how DevOps and automation are related. DevOps is not just automation, there's more to it. Conversely, automation is not exclusively used by "DevOps people". A lot of automation was taking place in IT before DevOps came around. Please don't consider to above diagram to represent all that is DevOps, or all this is automation. It is to help the reader picture how the two concepts are related. | {
"source": [
"https://devops.stackexchange.com/questions/3",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/20/"
]
} |
10 | I am working with Digital Ocean and Terraform and I already can automate the domain, subdomain, network preferences and the host but there is a section called User data that looks like this: The description of that field says Allows the use of Cloud-init to configure your droplet . Looking around I found the documentation . How to take advantage of this while using Terraform ? | Cloud-init files are essentially bootstrap codes, that run before each startup, and can - among others - modify files, set up services, create users, etc. Not all types of droplets support all functionalities of cloud-init, for example CoreOS uses it's own implementation, with a very limited subset of valid values. To use this in terraform, simply provide the cloud-init file during droplet creation: main.tf : resource "digitalocean_droplet" "web" {
image = "coreos-stable"
name = "web"
region = "lon1"
size = "2gb"
private_networking = true
ssh_keys = ["${digitalocean_ssh_key.dodemo.id}"]
user_data = "${file("web.conf")}"
} web.conf : #cloud-config
coreos:
units:
- name: "etcd2.service"
command: "start"
- name: "fleet.service"
command: "start" This will for example create a droplet, where CoreOS will run etcd2 and fleet during startup You can find some more examples in this repository , where I show how one can use these configuration options to set up some simple docker based services on CoreOS | {
"source": [
"https://devops.stackexchange.com/questions/10",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/15/"
]
} |
21 | I'd like to backup all Jenkins jobs and config files. What's the easiest way of doing it? | There are many ways to do this but the easiest way I can think is doing a backup of the Jenkins Home folder. You can see where is your Jenkins home with: echo $JENKINS_HOME And for example, if you only want to backup the jobs you can go to: cd $JENKINS_HOME/jobs And make a backup for that folder. All that configuration will be a bunch of XML files. If you are using the official Jenkins docker image , the home will be on: /var/jenkins_home | {
"source": [
"https://devops.stackexchange.com/questions/21",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/3/"
]
} |
61 | All the cloud providers are marketing their "serverless" solutions. The promise is that serverless is going to replace the way developers are currently develop their software, and operations manage it in production. What is "serverless"?
Where can one learn more about it, and how it can be used today? | Wikipedia's article on serverless computing provides a decent introduction to the topic: Serverless computing, also known as function as a service (FaaS), is a cloud computing code execution model in which the cloud provider fully manages starting and stopping of a function's container platform as a service (PaaS) as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour. The idea is that a developer shouldn't need to care about the server infrastructure at all . The cloud provider manages the physical servers, the operating system used and all the traditional difficulties involved with running a server. Serverless computing changes your architecture from thinking about what machines are doing to what functions are doing. AWS Lambda is the example that comes to mind - you pay for and run functions , without any mention about what type of physical infrastructure is running below. There are also competing serverless hosts such as Azure Functions (or you can simply search if you're not interested in either of those). There are quite a few advantages to serverless (although you do need to write in a slightly different way than you're used to in some cases, because it's a totally different architecture): Scalability essentially comes for free - because you're just paying to run a function, the cloud provider can easily dedicate more hardware as needed to run your code. You can also potentially scale as demand rises, rather than paying a fixed rate whether your application is used once or a million times. The server software and hardware no longer needs to be managed by a developer - the cloud provider handles that. If you've ever used something like Arch on a server, you'll know how easy it is to wipe out a critical package and break everything! It frees up developers to focus on what they're good at - code . Most developers probably won't be great at both server infrastructure and programming - serverless just takes one problem out of the equation. | {
"source": [
"https://devops.stackexchange.com/questions/61",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/6/"
]
} |
79 | I would like to use the Terraform MySQL Provider to keep a list of mysql users and grants handy for creating new test environments. The .tf and .tfstate files both seem to want to store the MySQL passwords in plaintext. Concerning .tf: It is my understanding that .tf files live in revision control and are maintained by a team. How does that practice differ when secrets are in the .tf ? It is possible to encrypt these values at all? Concerning .tfstate: I can store the .tfstate securely somewhere after running Terraform apply, but it would be preferable for this use case to not store it at all? | Terraform supports adding an additional file with variables during invocation. documentation: https://www.terraform.io/intro/getting-started/variables.html#from-a-file We are using that feature to provide a secrets.tfvars file on each invocation of Terraform. We also use a script to wrap the command so that its invocation is consistent, and all team members avoid having to make the same mistakes. The wrapper synchronizes .tfstate with S3 before an execution, and pushes .tfstate back to S3 at the end. I also hear of people doing the same thing with state stored in Consul, even adding a kind of semaphore in consul to prevent two people from starting Terraform at the same time. When you avoid setting a default value in a variables.tf file, it forces the user to input the value. It can be either entered manually or using the -var-file command option like described above. Not setting a default on your secrets is a good way to enforce changes that require a change in secrets. The secrets.tfvars file is a symbolic link to one of the files with secrets which are not stored in version control. We have several, one per environment, like so secrets-prod.tfvars , secrets-dev.tfvars , secrets-stg.tfvars , etc... An even better practice would be to generate these secrets files during the wrapper script based on data in Vault or some other way to share secrets. Since currently when the format of secrets changes, or secrets themselves, we need to communicate it to the team outside the version control channel - and this doesn't always work well, to be honest. But secrets do change infrequently. | {
"source": [
"https://devops.stackexchange.com/questions/79",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/36/"
]
} |
83 | What is a good strategy to keep my site online when S3 goes offline? If S3 US East 1 goes offline, how should I have my app configured/structured to prevent that taking my entire site offline? What are the best strategies to diversify in this sort of situation? | In March 2015, Amazon AWS announced they support S3 replication across regions. When a certain region in S3 goes offline, you can serve files from your mirror in another region. source: https://aws.amazon.com/blogs/aws/new-cross-region-replication-for-amazon-s3/ The practice of keeping your infrastructure online by doing a switch over to another region is a complex one, but S3 is a relatively small and simple component. Netflix has a great article on their experience with Chaos Gorilla. This also applies to service degradation, like increased latency. Not just when a service you depend upon is completely offline. Netflix has an article on this as well: Chaos Engineering Upgraded . | {
"source": [
"https://devops.stackexchange.com/questions/83",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/91/"
]
} |
86 | What is the difference between SRE and DevOps? Site Reliability Engineering and Development Operations seem to overlap a lot in detail. How do I know which group is responsible for what, and how do I know what jobs would be appropriate for my skillset? It seems like SRE is about maintaining servers and network, and DevOps is about maintaining code, is that correct? Isn't there still a fair amount of overlap between those two? | DevOps is about maintaining code, is that correct? DevOps is not "just" about code, or systems, or any one thing. DevOps is a very general term that covers all things related to software delivery. Site Reliability Engineering is a term popularized by Google. From this article https://landing.google.com/sre/interview/ben-treynor.html we can distill their TL;DR: Fundamentally, it’s what happens when you ask a software engineer to
design an operations function. Operations, Engineering, and Software Development are blurring together. The degree of automation required to create and maintain a mature infrastructure requires skills from all three. SREs are admins, and engineers, and developers. See also: http://shop.oreilly.com/product/0636920041528.do | {
"source": [
"https://devops.stackexchange.com/questions/86",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/91/"
]
} |
98 | Looking at trying to build some resilience into our Ansible setup which deals with provisioning and configuration. I understand a few methods of testing on the configuration side of things but I'm wondering how best to implement testing on the provisioning side of things, and if there are any tools which can help with this type of implementation. Currently a lot of our testing is done serially during the playbook which makes a lot of sense for stuff like "has service come up; is the vip available; has this async task finished" but what really concerns me is our ability to manage drift of configuration at both the application and provisioning layer (such as VM configuration). I'm aware Ansible isn't the best tool for working with configuration drift but I'm curious to see your own opinions. If you have something to fully automate the process even better. (we have a few ugly scripts which report back in slack daily). Note : Right now we have a few conditions where a reprovision might occur (e.g. rebuild from backup, critical systems issue) but typically it just loops through some of the ansible configuring tasks and thinks no more of it. | Some options out there.. Testing tools: Sorted by github stars Serverspec - Ruby, most popular tool out there, built on ruby's rspec Goss - YAML, simple, <10MB self-contained binary, extremely fast, can generate tests from system state Inspec - Ruby, think of it as an improved serverspec, almost same syntax, made by the chef guys. Built to be easier to extend than serverspec Testinfra - Python, has the cool feature of being able to use Ansible's inventory/vars Major differences between them: Ultimately, I would suggest spending a day experimenting with all of them to get a feel for them before deciding for yourself. With the exception of Goss, all the frameworks can run against a remote machine (ex. over ssh). Goss only runs locally or in docker w/ dgoss. All frameworks can be run locally on a server, but require Python or Ruby to be installed or embedded. Inspec provides a self-contained <150MB bundle with an embedded version of ruby. Goss is a single <10MB binary with no external dependencies. Goss has built in support for nagios/sensu output, this allows for easier integration with monitoring tools. Goss tests tend to be simpler, but less flexible since it's based on YAML. Other frameworks allow you to leverage the full power of the underlying language Python/Ruby to write tests or extend the tool's functionality. (simplicity vs flexibility) Goss allows you to generate tests from current system state Testinfra to my knowledge is the only one that has built-in support for ansible inventory and variables Inspec is backed by Chef Continuous/divergence testing: Chef Compliance - works with inspec to continuously test your servers, paid product Goss - Can be easily hooked into Nagios or Sensu. Also, supports exposing server tests as an http endpoint. Testing harnesses for development: kitchen - Testing harness tool, launches instance, runs config management code, runs test suite. Made by the chef guys Molecule - Similar to test kitchen, but written specifically for ansible Full Disclosure: I'm the author of goss UPDATE: InSpec 4.x or above uses a mixed commercial / open source license - see comments. | {
"source": [
"https://devops.stackexchange.com/questions/98",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/143/"
]
} |
133 | The practices describing DevOps, such as continuous delivery, automation, etc. are relevant to products that provide continuous service, such as SaaS products. For example, a software development company that mostly does projects for other clients might never be maintained these after the project is over. And client projects are not shared with other clients, because irrelevant. Does DevOps even apply to companies who develop multiple projects that are one-offs? What DevOps practices apply in this case, if at all? | Absolutely not! DevOps is all about breaking down the traditional silos (departments) in order to be more efficient. Better communication between teams, improved visibility and reliable and automated process are ways to achieve a better product. I used to work for a big media company where we would support an internal tool and develop public-facing websites. The benefits of DevOps in our case were the following: Through continuous building, we let know the development team earlier rather than later if there are integration or build problems with their code. They can fix issues while their mind is still on the piece of code they just committed. Through continuous testing and delivery (into QA), we enabled the QA team to find problems earlier and report them earlier. This reduced the time it took to find and correct bugs as well as reduce the complexity of these investigation. With out log collection & aggregation tools, we gave to the developers access to something they wouldn't usually look at (they were very keen on the debuggers :) - understanding how logs are seen and used by other teams improved the overall quality of logs We often shared information and created documentation to share knowledge between teams, trying to break down walls. By understanding the Ops' needs, we create a few guides for what should always be kept in mind when bootstrapping an application (where/how to manage properties, etc.). By understanding the Dev's reality (code more features, faster, gogogogo!) we were able to have the ops create servers and clusters that were better suited to the dev's needs. The overall quality of deployments was greatly improved too. Deployments were handled by our team, so we had perfect visibility on both Ops and Dev. This eliminated many issues related to the "code hand-off" where the dev would hand over a package and one-page document to the ops saying "Install this!". Overall, I would say that regardless if you are updating your production environment once per day or once per month and regardless of how many customers you have or your business model, every enterprise can use better communication, better tools, better visibility, faster feedback, etc. | {
"source": [
"https://devops.stackexchange.com/questions/133",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/6/"
]
} |
157 | When applying for a job, usually you can find two types of similar jobs: Sysadmin Engineer and DevOps Engineer . Both of them deal with server configuration and ensure the reliable operation of computer systems. It can be hard to tell the difference between the two. What are the main differences between them? | Mainly, DevOps is not a role (when used as such it's more a buzzword than a real role). DevOps is roughly an organization pattern aiming at breaking the silo between developers and sysadmins. The main goal is to build teams with devs and sysadmins (along with testers, usually) responsible for a product (application) from its definition and architecture decisions up to maintaining it in production. Each member of the team takes part in decisions over the whole life-cycle of the product: a dev will do some sysadmin tasks in production, and a sysadmin will participate in the design phase of the product to avoid caveats from the infrastructure perspective, for example. Ideally, a sysadmin would also be part of the development team for the product; in the real world, sysadmins code more on the configuration around the product and on monitoring solutions, but being able to voice concerns to the other members of the team avoids a lot of misunderstanding in the deployment process. | {
"source": [
"https://devops.stackexchange.com/questions/157",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/3/"
]
} |
213 | In my career, I have been both a software developer and an ITIL practitioner in an operations role. Thus DevOps was a natural progression for me. However, I have always struggled with the highly specialised language that ITIL introduces and with making that "Developer Friendly" enough to not be a complete turn-off to developers. ITIL is an internationally recognised IT Service Management framework that has been developed over 30 years as a set of practices that have a proven benefit to the operational stability and maturity of an organization. Is DevOps truly compatible with ITIL, or in essence do we need to take the spirit of ITIL and "translate" it to language that is better understood by development teams: Incident & Problem Management → Production Defects, Bugs or Issues Change & Release Management → Continuous Delivery Event Management → Logging, Telemetry, Instrumentation and Alerting | In my opinion, the DevOps culture comes along with a methodology change toward Agile process management. ITIL is heavily aimed at a clear formalism of the process and the results, and is thus more adapted to a Waterfall model. This doesn't mean ITIL is incompatible with DevOps, but usually these will be two separate processes with different timelines.
I mean that the inclusion of a new product within the ITIL referential will usually be delayed until the product/application has been released in production for a while, by which point the early pitfalls have been worked through and the documentation needed to integrate ITIL has been produced and adapted after the product is "live". One of the things in ITIL is the Service Design, which is assumed to be defined before any development task; an agile process will/may review the design in each iteration, breaking the formalism needed in an ITIL process. The main goal of ITIL is, as you said, to provide a framework to ensure nothing is omitted between the design/conception and maintenance phases (Build/Run). In a devops culture, the whole team is responsible for all phases over the long term, hence the formalism is reduced. That doesn't mean we have to forget ITIL; the core principles are absolutely good and, in my opinion, should be used as a checklist to build the initial backlog of a product.
It's just that following the ITIL principles with all their formalism goes against the time-to-market reduction goal of quick, iterative software development, and sometimes it is not even applicable, as there's less transmission of information needed between teams when the tasks are done by the same team. | {
"source": [
"https://devops.stackexchange.com/questions/213",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/397/"
]
} |
302 | I'm still a student, but I'm not knowledgeable about operations, and my English is still bad. My question is: Why does development oppose operations ? When does developing oppose operations? | The point of DevOps, is that development shouldn't oppose operations, instead they should support each other. Traditionally, due to waterfall deployments and large scale updates, development would cause operations a variety of problems when deploying due to inadequate testing, changing server environments, the list goes on and on. Essentially, the updates were too large for the operations team to be able to effectively deploy them without some problems arising in the process. These problems might be why you believe that development opposes operations . On the other hand, DevOps works to reduce update size, decrease rigid environments, and generally improve the handoff of the application between development and operations by increasing the amount of times the handoff occurs each year. With the increased number of deployments comes less headaches for operations, because they have either automated a large amount of work required to update the products, or they better anticipate and prepare for the updates. Tldr: DevOps aims to nullify the theory that development opposes operations by creating a mindset where operations and development work together to frequently deploy products in a timely and easily reproducible way. | {
"source": [
"https://devops.stackexchange.com/questions/302",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/389/"
]
} |
342 | A team of IT sysadmins that have experience using shell scripting to solve their problems are contemplating starting to use Ansible instead. Are there substantial differences and good reasons to start using Ansible vs. continuing to write shell scripts? | I have never used Ansible but for a few weeks, I have been trying to figure out what good Ansible could be in comparison with shell scripts–which proves, at least in my case, that the haunting ad-campaigns they run are effective! After many unsuccessful attempts–which proves how their documentation fails at answering one of the most obvious questions–I think I finally got it. My conclusion is that over shell scripting, Ansible essentially offers 1. The possibility of checking that a system agrees with a desired state, 2. the ability to integrate with Ansible Tower, which is a paying system that seems to include monitoring abilities. In some important cases, like when implementing the immutable server pattern, point 1 is probably not very useful, so the list of advantages is rather thin. It seems to me that the benefits offered by Ansible over shell-scripting, as the documentation presents them, could be sensible in a handful of optimistic cases well covered by available modules but are small or even hypothetical in the general case. For a skilled shell-programmer, these benefits are most likely counter-balanced by other aspects of the trade-off. But my conclusion maybe only proves how bad the introduction material is at displaying the actual advantages of this software! Now, I propose to watch the introduction video and go randomly, as a potential new user, through the introduction material to Ansible and let's compare it to what a skilled shell programmer can produce in a reasonable time. The quick start video: There is a quick start video . It starts with a page claiming that… well, these are not really claims, these are bullet lists, an artefact commonly used to suspend critical judgement in presentations (since the logic is not shown, it cannot be criticised!) 1. Ansible is simple: 1.1 Human readable automation – Specifications are technical documents, how could name: upgrade all packages
yum:
name: '*'
state: latest be easier to read than the corresponding yum invocation found in a shell-script? Furthermore, anybody who had contact to AppleScript dies laughing when they read “human readable automation.” 1.2 No special coding skills required – What is coding if not writing formal specifications? Ansible has conditionals, variables, so, how is it not coding? And why would I need something I cannot program, that would henceforth be inflexible? The statement is happily inaccurate! 1.3 Tasks executed in order – Well, maybe some codegolf aficionados are aware of languages that execute tasks in disorder, but executing tasks in order hardly looks exceptional. 1.4 Get productive quickly – Skilled shell programmers are productive now. This counter-argument is just as serious as the initial argument. 2. Ansible is powerful A popular salesman trick to sell artefacts is to fool people into believing they will acquire the “power” of these artefacts. The history of advertisement for cars or isotonic drinks should supply a convincing list of examples. Here Ansible can do “app deployment” – but shell script surely do, “configuration management” but this is a mere statement of the purpose of the tool, not a feature, and “workflow orchestration” which looks a bit pretentious but no example in this document goes beyond what GNU Parallel can do. 3. Ansible is agentless To populate the column, they wrote in three different manners that this only needs ssh, which, as everybody knows is a daemon and has nothing to do with these agents pervading the world of configuration management! The rest of the video The rest of the video introduces inventories, which are static lists of resources (like servers) and demonstrates how to deploy Apache on three servers simultaneously. This really does not match the way I work, where resources are highly dynamic and can be enumerated by command-line tooling provided by my cloud provider, and consumed by my shell functions using the pipe | operator. Also, I do not deploy Apache on three servers simultaneously, rather, I build a master instance image that I then use to start 3 instances which are exact replicas one of the other. So the “orchestrating” part of the argumentation does not look very pertinent. Random documentation step 1: Integration with EC2 EC2 is the computing service from Amazon, interacting with it is supported by some Ansible module . (Other popular cloud computing providers are also provided.): # demo_setup.yml
- hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Provision a set of instances
ec2:
key_name: my_key
group: test
instance_type: t2.micro
image: "{{ ami_id }}"
wait: true
exact_count: 5
count_tag:
Name: Demo
instance_tags:
Name: Demo
register: ec2 The corresponding shell-script would be essentially identical with YAML replaced by JSON: provision_a_set_of_instances()
{
aws --output=text ec2 run-instances --image-id …
} or the JSON version provision_a_set_of_instances()
{
aws --output=text ec2 run-instances --cli-input-json "$(provision_a_set_of_instances__json)"
}
provision_a_set_of_instances__json()
{
cat <<EOF
{
"ImageId": …
}
EOF
} Both versions are essentially identical; the bulk of the payload is the enumeration of the initialisation values in YAML or JSON structures. Random documentation step 2: Continuous Delivery and Rolling Upgrades The largest part of this guide does not display any really interesting feature: it introduces variables (IIRC, shell scripts also have variables!), and an Ansible module that handles mysql, so that, instead of searching for "how do I create a mysql user with privileges on X Y" and ending up with something like create_application_db_user()
{
mysql --host "${mysql_host}" --user "${mysql_user}" --password "${mysql_password}" "${mysql_table}" <<EOF
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
EOF
} you search for "how do I create a mysql user with privileges on X Y in ansible " and end up with - name: Create Application DB User
mysql_user: name={{ dbuser }} password={{ upassword }}
priv=*.*:ALL host='%' state=present The difference is still probably not very meaningful. On that page we also discover that Ansible has a template meta-Programming language {% for host in groups['monitoring'] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %} When I see this, I happen to really be in my comfort zone. This kind of simple meta-programming for declarative languages is exactly the same theoretical paradigm as BSD Makefiles! Which I happen to have programmed extensively This excerpt shows us that the promise of working with YAML file is broken (so I cannot run my playbooks through a YAML parser, e.g. ). It also shows us that Ansible must discuss the subtle art of evaluation order: we have to decide if variables are expanded at the “declarative part” of the language or at the “imperative” meta-part of the language. Here shell programming is simpler, there is no meta-programming, aside from explicit eval or external-script sourcing. The hypothetical equivalent shell excerpt would be enumerate_group 'monitoring' | {
while read host; do
…
done
} whose complexity in comparison to the Ansible variant is probably tolerable: it just uses the plain, regular, boring constructs from the language. Random documentation step 3: Testing strategies Last, we meet what turns out to be the first actually interesting feature of Ansible: "Ansible resources are models of desired-state. As such, it should not be necessary to test that services are started, packages are installed, or other such things. Ansible is the system that will ensure these things are declaratively true. Instead, assert these things in your playbooks." Now it starts to be a bit interesting, but: Aside from a handful of standard situations readily implemented by available modules, I will have to feed the bits implementing the test myself, which will quite probably involve some shell commands. Checking for the conformity of installations might not be very relevant in the context where the immutable server pattern is implemented: where all systems running are typically spawned from a master image (instance image or docker image for instance) and never updated – they are replaced by new ones instead. Unaddressed concern: the maintainability The introductory material from Ansible ignores the question of maintainability. With essentially no type system, shell-scripting has the maintainability ease of JavaScript, Lisp or Python: extensive refactorings can only be achieved successfully with the help of an extensive automated testsuite – or at least designs that allow easy interactive testing. That said, while shell scripting is the lingua franca of system configuration and maintenance, nearly every programming language has an interface to the shell. It is therefore totally feasible to leverage the maintainability advantage of advanced languages, by using them to glue together the various bits of shell configuration. For OCaml, I wrote Rashell that essentially provides a handful of common interaction patterns for subprocesses, which makes the translation of configuration scripts to OCaml essentially trivial. On the side of Ansible, the very weak structure of playbooks and the presence of a meta-programming feature make the situation essentially as bad as it is for shell scripting, with the minus points that it is not obvious how to write unit tests for Ansible, and the argument of introducing a higher-level language ad hoc cannot be mimicked. Idempotency of configuration steps The documentation of Ansible draws attention to the necessity of writing idempotent configuration steps. More precisely, configuration steps should be written so that the step sequence a b a can be simplified to a b , i.e. we do not need to repeat a configuration step. This is a stronger condition than idempotency. Since Ansible allows playbooks to use arbitrary shell commands, Ansible itself is unable to guarantee that this stronger condition is respected. This relies only on the programmer's discipline, and the importance of this variation of idempotency when writing configuration scripts is certainly not a novelty. Post-Scriptum. Since this answer seems to enjoy a relative popularity, I fixed a few embarrassing syntax errors and typos. By a twist of life I also had to use Ansible for two years in my work. Overall my experience confirms what I foresaw here and I can hardly think of a situation where shell scripts would have been really outperformed by Ansible. In some respects, Ansible is just worse than shell scripting.
At least the shell has functions, these functions can be mocked, and it is possible to test part or all of them, so overall the shell has much better software engineering features than Ansible has. In a shell script it is also possible to process data, and awk can express everything SQL can, which is very important when programming configuration – the information we are working with here is not intrinsically hierarchical, so there is a need for extracting and rewriting. Ansible is so bad at extracting and rewriting data! Treatments must be expressed with a mixture of YAML templating at the playbook step level and a dialect of Jinja at the dictionary member level… this is cumbersome, ugly, hard to write, hard to test and poorly documented (I regularly looked up the Jinja filter implementations!). | {
"source": [
"https://devops.stackexchange.com/questions/342",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/6/"
]
} |
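A minimal sketch of the testability claim made in the Post-Scriptum of the answer above (question 342): shell functions can be shadowed ("mocked") inside a test harness. All function and package names below are illustrative assumptions, not taken from the answer; the point is only that the side-effecting wrapper can be redefined so the test records calls instead of touching the system.
#!/bin/sh
# Dependency wrapper: the only place that actually touches the package manager.
install_package()
{
    yum install -y "$1"
}
# Configuration step built on top of the wrapper.
configure_webserver()
{
    install_package 'httpd'
    install_package 'mod_ssl'
}
# Test harness: redefine the wrapper so it records calls instead of acting.
test_configure_webserver()
{
    calls=''
    install_package() { calls="${calls} $1"; }   # the mock shadows the real function
    configure_webserver
    [ "${calls}" = ' httpd mod_ssl' ] || { echo "FAIL: unexpected calls:${calls}"; return 1; }
    echo 'PASS'
}
test_configure_webserver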
433 | REPOSITORY TAG IMAGE ID CREATED SIZE
an-image 1 X 26 seconds ago 279 MB when the docker image will be run, the following message is shown: No java installations was detected.
Please go to http://www.java.com/getjava/ and download When Oracle JDK is deployed the docker image size is more than doubled! REPOSITORY TAG IMAGE ID CREATED SIZE
an-image 2 X 26 seconds ago 666 MB Discussion When "Is there no Oracle JDK for docker" is googled only some links to docker image that contain docker are returned. When Oracle JDK is deployed in the docker image the size is more than doubled. I want to keep the docker images as small as possible, but the Oracle JDK seems to be larger than the image itself! Question Is there no Oracle JDK for docker? | No. Because you cannot distribute Oracle JDK or JRE, the license in effect doesn't allow distribution . When distributed by a third party (embedded with your app) all the liability for it not working is on that party. This is why you will not find Oracle JDK/JRE on any of the public Docker registries, or in any Linux package repositories for that matter. You can create your own image, and install Oracle JDK or JRE using the webupd8team/java package for Ubuntu/Debian. And if you are using Alpine Linux this blog post explains which dependencies are required, and links to StackOverflow for installation instructions. Update : Oracle has an official Docker image on the Docker Store now. https://blogs.oracle.com/developers/official-docker-image-for-oracle-java-and-the-openjdk-roadmap-for-containers Link to Docker Store Oracle Java Image - https://store.docker.com/images/oracle-serverjre-8 | {
"source": [
"https://devops.stackexchange.com/questions/433",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/210/"
]
} |
447 | In many blog posts, and general opinion, there is a saying that goes "one process per container". Why does this rule exist?
Why not run ntp, nginx, uwsgi and more processes in a single container that needs to have all processes to work? blog posts mentioning this rule: "Single-process-per-container is a recommended design pattern for Docker applications." "Docker is only for creating single-process or single-service containers." "better to use one process per container" "Run a single service as a container" "One process per container" "one process per container" | Lets forget the high-level architectural and philosophical arguments for a moment. While there may be some edge cases where multiple functions in a single container may make sense, there are very practical reasons why you may want to consider following "one function per container" as a rule of thumb: Scaling containers horizontally is much easier if the container is isolated to a single function. Need another apache container? Spin one up somewhere else. However if my apache container also has my DB, cron and other pieces shoehorned in, this complicates things. Having a single function per container allows the container to be easily re-used for other projects or purposes. It also makes it more portable and predictable for devs to pull down a component from production to troubleshoot locally rather than an entire application environment. Patching/upgrades (both the OS and the application) can be done in a more isolated and controlled manner. Juggling multiple bits-and-bobs in your container not only makes for larger images, but also ties these components together. Why have to shut down application X and Y just to upgrade Z? Above also holds true for code deployments and rollbacks. Splitting functions out to multiple containers allows more flexibility from a security and isolation perspective. You may want (or require) services to be isolated on the network level -- either physically or within overlay networks -- to maintain a strong security posture or comply with things like PCI. Other more minor factors such as dealing with stdout/stderr and sending logs to the container log, keeping containers as ephemeral as possible etc. Note that I'm saying function, not process. That language is outdated. The official docker documentation has moved away from saying "one process" to instead recommending "one concern" per container. | {
"source": [
"https://devops.stackexchange.com/questions/447",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/6/"
]
} |
452 | What is an artifact repository? If it's just a place to store files, can't I just use a source control system? | During development you generate a fair amount of different artifacts. These might include: The source code The compiled application A deployable package Documentation and potentially others as well While you could use a source control system to store all of them, it's usually massively inefficient, as source control systems are usually designed to handle text based files, and not binary files. You might be able to use them as a simple storage mechanism, if most of your releases are text based, and you don't have to store a lot of binary data. Artifact repositories however are designed to store all kinds of files, including binary ones. This includes anything from zipped up source codes, to build results, to things like docker images as well. Also, they usually not only store these artifacts but also help manage them using various additional functions, for example: Versioning support: properly store some metadata, like when each artifact was built, what their version number is, store their hashes, etc. Retention: make sure you only keep the important artifacts, and automatically delete ones that are only snapshots / not needed anymore, etc. based on various criteria you can set up Access control: set up who can publish and who can download the various artifacts Promotion: ability to promote artifacts. For example you can have snapshot artifacts with a short retention period on a server near your coders, and a separate repository near the live servers, where only artifacts that have been deemed deployable appear. This also includes support for various version channels, and moving artifacts between them (like promoting a specific version from beta to stable). Act as a native repository for the artifacts. Meaning you can use it as the main repository for maven, rubygems, docker, etc. This can also include caching of artifacts from the official repositories as well. | {
"source": [
"https://devops.stackexchange.com/questions/452",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/72/"
]
} |
466 | The question about " What is an artifact repository? " contains an answer with an interesting explanation about the repository part of it.
And from reading the entire answer, I am not sure what exactly an " artifact " means in the context of DevOps. Any suggestions? Ps: From one of the answers I seem to understand that maybe artefact is what I'm wondering (confused?) about ... | Wikipedia has a very good answer to this question. Artifact , sometimes also called Derived Object , is a product of some process applied to the Code Repository . Originally they were called Build Artifacts , but as more processes were applied other than build to create them, the first word was simply dropped. The major distinction is that artifacts can be recreated from the code repository using the same process, providing you have preserved the environment in which the process was applied. As this process can be time consuming and the environment can be preserved imperfectly to be able to recreate the artifacts in the exact same way, we started to store them in Artifact Repositories . Storing them apart from Code Repository in an Artifact Repository is a design decision a DevOps engineer would make. Some companies, namely Perforce , suggest to use their Code Repository as Artifact Repository as well. There are different requirements in terms of access , auditing , object sizes , object tagging and scalability on each repository and so depending on situation it is often better to use two distinct products. For example Git repositories are copied in their entirety to every development machine and so storing artifacts in the code repository would increase its size beyond all reason, although lately there are ways to mitigate this. Another decision to make is which artifacts to store. Some companies store even intermediate artifacts as individual object files, to speed re-builds, others store simply just the final binaries. Not all artifacts have the same value. Artifacts resulting from a release build could have different requirements than artifacts resulting from a developer build. Most common artifacts are results of the following processes: Configuration , Preprocessing , Compilation , Linking , Automated Testing , Archiving , Packaging , Media files creation and processing , Data File Generation , Documentation Parsing , Code analyzing , QA , etc. | {
"source": [
"https://devops.stackexchange.com/questions/466",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/40/"
]
} |
488 | Here is a quote from the current content of continuous-integration : ... process of merging developer's working code copies to a shared codebase frequently to prevent or minimize integration problems. OK, I get that. But then there is also continuous-delivery and continuous-deployment , and that's where I continuously get a bit lost: How does continuous integration relate to continuous delivery and/or continuous deployment , assuming that somewhere along the line(s) via integration you end up delivering in a target environment where everything will be deployed . What's the difference between continuous delivery and continuous deployment ? Back in the days, before DevOps was called DevOps, we used terminology which might possibly help to understand these new DevOps terms, such as: promote to (or demote from) some pre-prod target, optionally combined with some type of regeneration process (compiles, binds, etc) to package all related components together in executable-like things. That's what should be similar/close to continuous integration , or not? distribute to some target environment, using something like FTP (if standard copies cannot bridge the gap), but do not yet activate it in the target. That's what should be similar/close to continuous delivery , or not? install (or activate ) in some target environment, combined with things like binds, stop/start operations, etc. That's what should be similar/close to continuous deployment , or not? | Continuous delivery and continuous deployment both take continuous integration one step further, by adding a 'deployment to production' step to the process.
The difference between continuous delivery and deployment is that for delivery this step is done manually, while for deployment it is automatic. Difference between Continuous Integration, Continuous Delivery and Continuous Deployment. Picture copied from codeproject.com Whether you do continuous delivery or continuous deployment is very much an implementation choice. If you do continuous deployment, changes in code will be deployed automatically after the acceptance tests are passed. This may or may not be desirable for your product. With continuous delivery, people can make a choice whether a particular code change is deployed or not (and possibly where exactly it is deployed). Since the difference between continuous delivery and deployment is small and many people are unaware of the exact difference, the two terms are sometimes used interchangeably. | {
"source": [
"https://devops.stackexchange.com/questions/488",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/40/"
]
} |
537 | Defining a boolean in a docker-compose.yml file: environment:
SOME_VAR: true and running docker up results in: contains true, which is an invalid type, it should be a string, number, or a null Attempts to solve the issue If true is changed to True the issue persists. Using 'true' is not accepted by the code itself ( a play framework app is started using the ./target/universal/stage/bin/APPNAME -Dplay.evolutions.db.default.autoApply= , i.e. either -Dplay.evolutions.db.default.autoApply=true or -Dplay.evolutions.db.default.autoApply=false parameter): VAR has type STRING rather than BOOLEAN Using yes or no as a variable results in: contains true, which is an invalid type, it should be a string, number, or a null Using yes and using a script that transforms yes to true works Discussion According the docs Any boolean values; true, false, yes no, need to be enclosed in quotes to ensure they are not converted to True or False by the YML parser : Environment Add environment variables. You can use either an array or a
dictionary. Any boolean values; true, false, yes no, need to be
enclosed in quotes to ensure they are not converted to True or False
by the YML parser. Environment variables with only a key are resolved to their values on
the machine Compose is running on, which can be helpful for secret or
host-specific values. environment:
RACK_ENV: development
SHOW: 'true'
SESSION_SECRET:
environment:
- RACK_ENV=development
- SHOW=true
- SESSION_SECRET Question Why is it not allowed? | This comes from a design choice of the YAML language about booleans. Every unquoted value matching this "regex": y|Y|yes|Yes|YES|n|N|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF Will be converted to True or False . Note that from YAML 1.2 onwards it appears that only true and false will be interpreted as boolean values. This starts causing a problem when your code tests an environment value for yes or no, for example with this script (other examples in the PR discussion ):
then
echo "Variable SOME_VAR is activated"
else
echo "Variable SOME_VAR is NOT activated"
fi And setting in your compose file environment:
SOME_VAR: yes Will result in SOME_VAR being True when the script runs, hence taking the wrong branch as it is not equal to yes . So the choice was made to disallow booleans, to prevent unwanted behaviors that are hard to debug when you're not aware of the YAML rule. I see two ways to get over the problem: Use an env_file instead; those files are not parsed IIRC and should prevent the conversion. As you already said, use a wrapper script around your launcher to define the value before launching the app; something along the lines of this should do: AUTOAPPLY=false
if [ "$SOME_VAR" == "true" ]
then
AUTOAPPLY=true
fi
./target/universal/stage/bin/APPNAME -Dplay.evolutions.db.default.autoApply=$AUTOAPPLY | {
"source": [
"https://devops.stackexchange.com/questions/537",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/210/"
]
} |
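As a complement to the answer above (question 537), here is a hedged sketch of its first workaround, the env_file option. The file and variable names are assumptions made for illustration; the point is that an env file holds plain KEY=value lines that reach the container as strings, so the YAML boolean conversion described above never takes place.
# Hypothetical example: write an env file with plain KEY=value lines.
cat > web.env <<'EOF'
SOME_VAR=yes
OTHER_FLAG=true
EOF
# Reference it from docker-compose.yml with:
#   services:
#     web:
#       env_file:
#         - web.env
# Quick check of what the container will actually see:
docker run --rm --env-file web.env alpine sh -c 'echo "SOME_VAR=$SOME_VAR"'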
609 | I had a perfect server, it was so pretty and rock solid and so I named it Petra. It was perfect in every way, everything was configured and tuned just right, it had perfect 100% service record and 753 days of uptime. I've spent a lot of time and effort making sure it run so well. No other server in the company had been this good. But last night this evil monster crashed my server for no reason. Of course I was notified at 2am and it took me until morning to get it up and running and everything configured and tuned up, but I'm afraid it is not going to be as good as before. It might take weeks before it is back to it's former glory. Now my uptime is gone, I don't have even measly three 9s and who knows what this will do to my reputation. Who is this Chaos Monkey and why did he do that to my server and why is he trying to ruin me? | TL;DR : Chaos Monkey was developed in 2010 at Netflix and released into wild in 2012 is part of the Simian Army , wildly popular among devoted followers . Built on principles of chaos engineering, the army increases resiliency to failure by injecting constant failure to the system. Concept Chaos Monkey was developed specifically for AWS where it will randomly kill instances within an Auto Scaling Group. It is meant to run during the business hours when engineers are alert and can quickly react to discovered failures. Simian Army Members of the army would sow chaos through other means: Latency Monkey will introduce random delays to services. Chaos Gorilla (Kong) will simulate outage of entire availability zone. Other Monkeys are helpful and remove the weak members of the herd: Conformity Monkey shuts down instances not following best practices. Security Monkey looks for known security vulnerabilities in configuration and services. Doctor Monkey shuts down unhealthy instances not conforming to certain metrics. Janitor Monkey looks for unused resources to reclaim. Failure is Inevitable Failure in the System is inevitable, something will always go wrong . You might not be able to choose what, but you can try to choose when. By introducing small errors throughout the day, you ensure that your engineers are present. By killing non-conforming services quickly, you ensure that failures happen often before deployment. By making the environment more adversarial, you ensure that it will be the developers who run into issues long before any service makes its way into production. Failures will be quickly apparent in integration phase of new services with the old ones, but that is ok, because the old production services are already resilient. Cattle not Pets Everyone will tell you lately: Do not treat your servers as pets . There is a power in numbers and any single point of failure will bring down the system. No matter how well you can tune and optimize your server, no matter how beefy hardware you can get, how much it can handle, it will never be a match for herd of small scalable instances. Chaos Monkey encourages you to think about removing all points of failure, because sooner or later, the Monkey will come! Everyone fails and even the Amazon S3 had an unpredictable outage . Anti-Fragile So what is the theory and why does it work? Nassim Nicholas Taleb in his book Antifragile describes a concept where living self-aware systems, will benefit from a small levels of randomness and actually become better in face of adversity. This is similar to annealing. 
He also describes an evolutionary way, where the fragility of parts in a system transfers into antifragility of the whole . The transfer occurs on two levels: By small random variations - developers making changes - the most fit for the environment will survive and propagate - pass tests and get deployed . Standard Development Life Cycle . By the failure of parts not capable of withstanding a larger level of randomness in the environment, the remaining parts that were able to withstand it compose a system that is, as a whole, better able to deal with a changing environment than before. This is essentially Chaos Monkey . Larger levels of randomness can be withstood using the second approach. | {
"source": [
"https://devops.stackexchange.com/questions/609",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/228/"
]
} |
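To make the "randomly kill instances within an Auto Scaling Group" idea from the answer above (question 609) concrete, here is a deliberately naive sketch using the AWS CLI. The group name is a made-up assumption, and a real Chaos Monkey adds scheduling, opt-in lists and safety checks that this toy omits, so treat it as an illustration only.
#!/bin/bash
# Toy "chaos monkey": pick one random instance from an Auto Scaling Group
# and terminate it. Purely illustrative; do not point at anything you care about.
set -euo pipefail
ASG_NAME="my-web-asg"   # hypothetical group name
# List the instance ids currently attached to the group.
instance_ids=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$ASG_NAME" \
  --query 'AutoScalingGroups[0].Instances[].InstanceId' \
  --output text)
# Pick one victim at random and terminate it.
victim=$(echo "$instance_ids" | tr '\t' '\n' | shuf -n 1)
echo "Terminating $victim from $ASG_NAME"
aws ec2 terminate-instances --instance-ids "$victim"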
650 | I have integrated Jenkins with Bitbucket using the Bitbucket Plugin . As per the plugin's Wiki, a given job will be triggered if the repository is set in the SCM of the job. As you know, if one sets an SCM in a Jenkins job, it is cloned in the pre-build stage. So far so good. However, the main purpose of the job I'm setting up has nothing to do with the repository's content; instead, I just want the job to process the payload sent by Bitbucket. One could say it's not a big deal in terms of storage to clone a repository even though you really don't need it. I don't think so; adding unnecessary steps that consume time and resources is not a good practice. So, the question is: does anyone know how to set an SCM in a Jenkins job but prevent it from cloning the repository? | Yes, definitely. I do this all the time. You can specify configuration options for your pipeline, and one of them is skipDefaultCheckout , which causes the pipeline to skip the default "Declarative: Checkout SCM" stage. The skipDefaultCheckout option is documented in Pipeline Syntax and here's an example Jenkinsfile showing how to use it: pipeline {
agent { label 'docker' }
options {
skipDefaultCheckout true
}
stages {
stage('commit_stage') {
steps {
echo 'sweet stuff here'
}
}
}
} | {
"source": [
"https://devops.stackexchange.com/questions/650",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/239/"
]
} |
653 | The term "treat your servers like cattle not pets" has proliferated in recent years, particularly when applied to Docker containers and Virtual Machines What does it actually mean? | Randy Bias chronicles the history of the term stating that it probably originated in 2011 or 2012 when Bill Baker used the analogy when describing "scale-up" vs. "scale-out" architectural strategies. Bias adopted this into his presentations about cloud architectural patterns: In the old way of doing things, we treat our servers like pets, for example Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line. Bias continues to define Pets as Servers or server pairs that are treated as indispensable or unique systems that can never be down. Typically they are manually built, managed, and “hand fed”. Examples include mainframes, solitary servers, HA loadbalancers/firewalls (active/active or active/passive), database systems designed as master/slave (active/passive), and so on. and cattle as Arrays of more than two servers, that are built using automated tools, and are designed for failure, where no one, two, or even three servers are irreplaceable. Typically, during failure events no human intervention is required as the array exhibits attributes of “routing around failures” by restarting failed servers or replicating data through strategies like triple replication or erasure coding. Examples include web server arrays, multi-master datastores such as Cassandra clusters, multiple racks of gear put together in clusters, and just about anything that is load-balanced and multi-master. Fundamentally, what Bias and Baker are trying to convey is there has to be a transition from how we treat servers from being "Unique Snowflakes" with names and emotional attachments, to a model whereby if we have a problem with the server we create a replacement and destroy the problematic server. Finally, it is probably worth mentioning that in regulated environments taking a server out the back and shooting it may not be optimal. In these cases it is often advantageous to "freeze" the server, for example using docker pause to freeze a container. This can then be used to perform a Root Cause Analysis as part of the Incident or Problem Management Process. | {
"source": [
"https://devops.stackexchange.com/questions/653",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/397/"
]
} |
711 | Cloud services hosted by Amazon Web Services , Azure , Google and most others publish the S ervice L evel A greement , or SLA, for the individual services they provide. Architects, Platform Engineers and Developers are then responsible for putting these together to create an architecture that provides the hosting for an application. Taken in isolation, these services usually provide something in the range of three to four nine's of availability: Azure Traffic Manager: 99.99% or 'four nines'. SQL Azure: 99.99% or 'four nines'. Azure App Service: 99.95% or 'three nine five'. However when combined together in architectures there is the possibility that any one component could suffer an outage resulting in an overall availability that is not equal to the the component services. Serial Compound Availability In this example there are three possible failure modes: SQL Azure is down App Service is down Both are down Therefore the overall availability of this "system" must lower than 99.95%. My rationale for thinking this is if the SLA for both services was: The service will be available 23 hours out of 24 Then: The App Service could be out between 0100 and 0200 The Database out between 0500 and 0600 Both component parts are within their SLA but the total system was unavailable for 2 hours out of 24. Serial and Parallel Availability In this architecture there are a large number of failure modes however principally: SQL Server in RegionA is down SQL Server in RegionB is down App Service in RegionA is down App Service in RegionB is down Traffic Manager is down Combinations of Above Because Traffic Manager is a circuit breaker it is capable of detecting an outage in either region and routing traffic to the working region, however there is still a single point of failure in the form of Traffic Manager so the total availability of the "system" cannot be higher than 99.99%. How can the compound availability of the two systems above be calculated and documented for the business, potentially requiring rearchitecting if the business desires a higher service level than the architecture is capable of providing? If you want to annotate the diagrams, I have built them in Lucid Chart and created a multi-use link, bear in mind that anyone can edit this so you might want to create a copy of the pages to annotate. | After reading Tensibai's excellent answer , I realised I used to be able to calculate this for network analysis purposes. I dug out my copy of High Availability Network Fundamentals by Chris Oggerino and had a crack at working this out from, not quite first principals. Taking my serial example directly out of Tensibai's answer is simply a case of multiplying the probability of each component being available by the other: So 99.95% * 99.95% = 99.9% Calculating it in parallel is a little more complicated as we do need to consider what the percentage un availability will be: The calculation is done as follows: Multiply the un availability of the two regions together. 0.1% * 0.1% = 0.0001% Convert that back to availability 100% - 0.0001% = 99.9999% Multiply the Traffic Manager availability by the availability of the two regions. 99.99% * 99.9999% = 99.9899% The result is the whole system availability. 99.9899% is close to 99.99% I ended up using Excel to perform the calculations, here is the values: ... and the formulas ... | {
"source": [
"https://devops.stackexchange.com/questions/711",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/397/"
]
} |
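For readers who would rather script the arithmetic from the answer above (question 711) than use Excel, the following sketch reproduces the serial and parallel calculations, with awk used purely as a calculator. The figures are the ones the answer itself multiplies: two serial components at 99.95% inside each region and Traffic Manager at 99.99%.
#!/bin/sh
# Reproduce the compound-availability arithmetic from the answer.
awk 'BEGIN {
  svc = 0.9995      # each serial component inside a region (99.95%)
  tm  = 0.9999      # Traffic Manager (99.99%)
  # Serial: every component must be up, so availabilities multiply.
  region = svc * svc
  printf "Single region (serial): %.4f%%\n", region * 100     # ~99.90%
  # Parallel: the system is down only if both regions are down,
  # so the unavailabilities multiply.
  both = 1 - (1 - region) * (1 - region)
  printf "Two regions in parallel: %.4f%%\n", both * 100       # ~99.9999%
  # Traffic Manager still sits in series in front of the two regions.
  printf "Whole system: %.4f%%\n", tm * both * 100             # ~99.9899%
}'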
797 | The idea of having a DevOps Engineer has become quite popular recently , and it seems appealing to just have a person who can slot in and provide many of the benefits of DevOps, as described in the Puppet blog : Organizations using DevOps practices are overwhelmingly high-functioning: They deploy code up to 30 times more frequently than their competitors, and 50 percent fewer of their deployments fail, according to our 2015 State of DevOps report. However, I've noticed a lot of vocal opposition to the idea of a DevOps Engineer to try and make these improvements: Even with broad agreement about core DevOps attributes, controversy surrounds the term “DevOps engineer.” Some say the term itself contradicts DevOps values. Jez Humble, the co-author of Continuous Delivery, points out that just calling someone a DevOps engineer can create a third silo in addition to dev and ops — "...clearly a poor (and ironic) way to try and solve these problems." Why might it not be such a great idea for a business to hire a DevOps Engineer to try and 'implement DevOps', as opposed to the organisational change advocated by blogs like this ? Will the benefits be negated by just having an isolated DevOps role? | TL;DR : You should never try to hire a DevOps Team There are essentially three most common roles to hire for: DevOps Architect / Evangelist DevOps Engineer CI/CD Engineer These roles are distinct from your 6 essential software development roles that traditionally compose the software engineering organization: Product Management Software Development Tools Development Security and Compliance Quality and Testing System Operations (SRE) Lets go through the three roles one by one and see how they fit DevOps Architect or Evangelist Why : If you are lost, slow, broken and don't know what to do. When : At the start of the process in planing stages. What : Management level role to guide all managers and leads in the entire Software Engineering org. This person will plan the entire transformation of your engineering organization to a highly functioning state. Who : Consulting member well versed in theory, management practices, culture topics and operations who reports directly to VP of Software Engineering. In some cases and for smaller and mid-sized companies you might start the process instead with hiring a consulting organization, like DORA. DevOps Engineer Why : To bridge the gaps between teams if they are organized along the functional roles mentioned above to ensure cross functional level cooperation. To embed with product oriented teams, which have each of the 6 traditional roles included in the team, to help bridge the knowledge gaps and to help with implementation and adoption of the novel practices and tools. When : Once you've laid out your plans and the organizational transformation starts and the entire management team is on board. What : Enable cross function cooperation, ensure that team boundaries are broken down, that local optimizations inside teams are not creating a barrier to high throughput of work throughout the value chains all the way from customer wishes to customer deliveries. Who : Experienced engineer with skills both in software development and system operations. He should be skilled in the best practices, process and culture changes related to DevOps transformation. CI/CD Engineer Why : To help implement CI/CD pipelines, integrate your tool chain, bring in the tools that will enable better working of the company. 
When : During the transition in larger organization, while the above roles have been already filled. What : Engineer, which is essentially part of the tools team that will be able to setup CI/CD pipelines and start integrating internal systems in a way that will remove friction from the throughput of work. Who : Engineer experienced with Tools, Integration process, Release Management and DevOps practices. Someone who understands they are replacing human gating in release process by Automation. | {
"source": [
"https://devops.stackexchange.com/questions/797",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/14/"
]
} |
798 | We capture our applications release by using commits from source code management, new package into the artifactory system etc., How can we capture my tools release from vendor so that I will trigger my upgrade script to upgrade my tools itself, for ex: upgrading Jenkins whenever new release available from Jenkins itself? List of tools am expecting to upgrade: 1. Jenkins
2. GitHub Enterprise
3. Atlassian Jira
4. Atlassian Bamboo
5. Atlassian BitBucket Server Basically, continuous deployment(with proper pipeline including testing) to production without human interaction for these tools. Note: I am looking for some example, my requirement is for more than 10 tools. So for Jenkins, it will be Linux and Ubuntu. | TL;DR : You should never try to hire a DevOps Team There are essentially three most common roles to hire for: DevOps Architect / Evangelist DevOps Engineer CI/CD Engineer These roles are distinct from your 6 essential software development roles that traditionally compose the software engineering organization: Product Management Software Development Tools Development Security and Compliance Quality and Testing System Operations (SRE) Lets go through the three roles one by one and see how they fit DevOps Architect or Evangelist Why : If you are lost, slow, broken and don't know what to do. When : At the start of the process in planing stages. What : Management level role to guide all managers and leads in the entire Software Engineering org. This person will plan the entire transformation of your engineering organization to a highly functioning state. Who : Consulting member well versed in theory, management practices, culture topics and operations who reports directly to VP of Software Engineering. In some cases and for smaller and mid-sized companies you might start the process instead with hiring a consulting organization, like DORA. DevOps Engineer Why : To bridge the gaps between teams if they are organized along the functional roles mentioned above to ensure cross functional level cooperation. To embed with product oriented teams, which have each of the 6 traditional roles included in the team, to help bridge the knowledge gaps and to help with implementation and adoption of the novel practices and tools. When : Once you've laid out your plans and the organizational transformation starts and the entire management team is on board. What : Enable cross function cooperation, ensure that team boundaries are broken down, that local optimizations inside teams are not creating a barrier to high throughput of work throughout the value chains all the way from customer wishes to customer deliveries. Who : Experienced engineer with skills both in software development and system operations. He should be skilled in the best practices, process and culture changes related to DevOps transformation. CI/CD Engineer Why : To help implement CI/CD pipelines, integrate your tool chain, bring in the tools that will enable better working of the company. When : During the transition in larger organization, while the above roles have been already filled. What : Engineer, which is essentially part of the tools team that will be able to setup CI/CD pipelines and start integrating internal systems in a way that will remove friction from the throughput of work. Who : Engineer experienced with Tools, Integration process, Release Management and DevOps practices. Someone who understands they are replacing human gating in release process by Automation. | {
"source": [
"https://devops.stackexchange.com/questions/798",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/1173/"
]
} |
863 | If you had a Terraform configuration that had a moderate degree of complexity, how would you write tests around the configuration that could be executed as part of a Continuous Integration / Continuous Delivery pipeline? As an example, you may have a multi-cloud configuration that specifies the following desired state: Azure Container Services to host Docker in Azure Azure Blob Storage SQL Azure EC2 Container Service to host Docker in AWS Amazon S3 Storage Service Amazon RDS SQL Server database Potentially terraform apply could create the above from scratch, or transition from a partially deployed state to the above desired state. I am aware that Terraform splits its work into the execution plan stage and the application phase which actually makes changes to the target architecture. Can this be used to write tests against the execution plan, if so are there frameworks to help write these? | There is currently no full solution to this integrated into Terraform, but there are some building blocks that could be useful to assist in writing tests in a separate programming language. Terraform produces state files in JSON format that can, in principle, be used by external programs to extract certain data about what Terraform created. While this format is not yet considered officially stable, in practice it changes infrequently enough that people have successfully integrated with it, accepting that they might need to make adjustments as they upgrade Terraform. What strategy is appropriate here will depend a lot on what exactly you want to test. For example: In an environment that's spinning up virtual servers, tools like Serverspec can be used to run tests from the perspective of these servers. This can either be run separately from Terraform using some out-of-band process, or as part of the Terraform apply using the remote-exec provisioner . This allows verification of questions like "can the server reach the database?", but is not suitable for questions such as "is the instance's security group restrictive enough?", since robustly checking that requires accessing data from outside of the instance itself. It's possible to write tests using an existing test framework (such as RSpec for Ruby, unittest for Python, etc) which gather relevant resource ids or addresses from the Terraform state file and then use the relevant platform's SDK to retrieve data about the resources and assert that they are set up as expected. This is a more general form of the previous idea, running the tests from the perspective of a host outside of the infrastructure under test, and can thus collect a broader set of data to make assertions on. For more modest needs, one can choose to trust that the Terraform state is an accurate representation of reality (a valid assumption in many cases) and simply assert directly on that. This is most appropriate for simple "lint-like" cases, such as verifying that the correct resource tagging scheme is being followed for cost-allocation purposes. There is some more discussion about this in a relevant Terraform Github issue . In the latest versions of Terraform it is strongly recommended to use a remote backend for any non-toy application, but that means that the state data is not directly available on local disk. However, a snapshot of it can be retrieved from the remote backend using the terraform state pull command, which prints the JSON-formatted state data to stdout so it can be captured and parsed by a calling program. | {
"source": [
"https://devops.stackexchange.com/questions/863",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/397/"
]
} |
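As a small illustration of the last point in the answer above (question 863) about asserting directly on the state, here is a hedged shell sketch built on terraform state pull and jq. The jq path assumes the newer state layout with a top-level resources list, and the CostCenter tag is an invented example; since the state format is not officially stable, adapt the filter rather than rely on it.
#!/bin/bash
# Lint-like check: every aws_instance recorded in the Terraform state
# must carry a CostCenter tag (the tag name is an illustrative assumption).
set -euo pipefail
state=$(terraform state pull)
untagged=$(echo "$state" | jq -r '
  .resources[]?
  | select(.type == "aws_instance")
  | .instances[]
  | select(.attributes.tags.CostCenter == null)
  | .attributes.id')
if [ -n "$untagged" ]; then
  echo "FAIL: instances missing a CostCenter tag:"
  echo "$untagged"
  exit 1
fi
echo "PASS: all aws_instance resources are tagged"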
864 | I've seen some custom settings files used in combination with TFS build, but nothing native. Does Team Foundation Server or Visual Studio Team Services have a Jenkinsfile-like, declarative method for defining a build process? | There is currently no full solution to this integrated into Terraform, but there are some building blocks that could be useful to assist in writing tests in a separate programming language. Terraform produces state files in JSON format that can, in principle, be used by external programs to extract certain data about what Terraform created. While this format is not yet considered officially stable, in practice it changes infrequently enough that people have successfully integrated with it, accepting that they might need to make adjustments as they upgrade Terraform. What strategy is appropriate here will depend a lot on what exactly you want to test. For example: In an environment that's spinning up virtual servers, tools like Serverspec can be used to run tests from the perspective of these servers. This can either be run separately from Terraform using some out-of-band process, or as part of the Terraform apply using the remote-exec provisioner . This allows verification of questions like "can the server reach the database?", but is not suitable for questions such as "is the instance's security group restrictive enough?", since robustly checking that requires accessing data from outside of the instance itself. It's possible to write tests using an existing test framework (such as RSpec for Ruby, unittest for Python, etc) which gather relevant resource ids or addresses from the Terraform state file and then use the relevant platform's SDK to retrieve data about the resources and assert that they are set up as expected. This is a more general form of the previous idea, running the tests from the perspective of a host outside of the infrastructure under test, and can thus collect a broader set of data to make assertions on. For more modest needs, one can choose to trust that the Terraform state is an accurate representation of reality (a valid assumption in many cases) and simply assert directly on that. This is most appropriate for simple "lint-like" cases, such as verifying that the correct resource tagging scheme is being followed for cost-allocation purposes. There is some more discussion about this in a relevant Terraform Github issue . In the latest versions of Terraform it is strongly recommended to use a remote backend for any non-toy application, but that means that the state data is not directly available on local disk. However, a snapshot of it can be retrieved from the remote backend using the terraform state pull command, which prints the JSON-formatted state data to stdout so it can be captured and parsed by a calling program. | {
"source": [
"https://devops.stackexchange.com/questions/864",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/72/"
]
} |
885 | I have a job that will create files, unless one of the values being fed to it matches an older value. What's the cleanest way in Jenkins to abort or exit the job, without it being FAILED ? It exiting is the correct behavior so I want the build marked SUCCESS . It'll end up in an if statement like so; stage ('Check value') {
if( $VALUE1 == $VALUE2 ) {
//if they do match exit as a success, else continue with the rest of the job
}
} I don't want to throw an error code unless that can somehow translate into it being marked a successful build. | Please note: The following works for scripted pipelines only. For a solution for declarative pipelines see @kgriffs' answer . --- Figured it out. Outside of any stages (otherwise this will just end the particular stage as a success) do the following; if( $VALUE1 == $VALUE2 ) {
currentBuild.result = 'SUCCESS'
return
} return will stop the stage or node you're running on which is why running it outside of a stage is important, while setting the currentBuild.result prevents it from failing. | {
"source": [
"https://devops.stackexchange.com/questions/885",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/736/"
]
} |
893 | I was trying to provision spot instances via Ansible yesterday, and almost all my requests failed, even when I put my spot price == the on-demand price of that instance. So, when I had a look at the spot pricing graph, I found something very interesting: The spot price of the instance in us-east-1a is more than the on-demand price, which confused me. [in fact, ~5x times higher] Aren't spot instances preferred for the low cost? If yes, then why is the price higher than the on-demand price? According to AWS's docs : Spot instances provide you with access to unused Amazon EC2 capacity
at steep discounts relative to On-Demand prices. Also, does this mean that people are bidding over the on-demand price? If yes, then why so? Aren't they better off with an on-demand instance? Or did I understand the concept of spot instances wrong? | This is actually a great example of people slightly abusing spot. People are saying 'Our workload is really important but we don't want to pay full on demand price', so they set a bid price higher than on-demand on the assumption that it is very unlikely to be terminated, but still want to get the 'cheapest possible' spot price on offer. There have been cases where people enter, for example, $1000 (I've been told of at least one time this happened) because they want the benefit of the spot market. Of course naturally at some point the demand comes in and the spot price SOARS to make people pay higher than on-demand. The way the spot market works is that Amazon have X instances spare capacity, and they count from the top down until they fill the need for all X instances. The 'price', then, is the lowest price at which they can fulfill those X instances. So imagine Amazon have 10,000 instances - well they will count down to (say) $0.43 until they've got those 10,000 instances fulfilled. But if that supply suddenly drops to 100 instances, then maybe a few people put bid prices of $10,000 for their 100 instances, suddenly they'll be paying that $10k per hour. Tl;dr Understand how spot works, and set a cap you are prepared to pay. | {
"source": [
"https://devops.stackexchange.com/questions/893",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/46/"
]
} |
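As an illustration of the spot-pricing answer to question 893 above, here is a tiny, self-contained Python sketch of the clearing mechanism it describes: fill the spare capacity from the highest bids downwards and charge every winner the lowest accepted bid. All numbers are made up, and the toy deliberately ignores real-world details such as per-pool capacity and instance types.
# Toy model of "count from the top down until the capacity is filled".
def clearing_price(bids, capacity):
    # bids: list of (bidder, max_price, instances_wanted)
    winners, remaining, price = [], capacity, None
    for bidder, bid, wanted in sorted(bids, key=lambda b: b[1], reverse=True):
        if remaining <= 0:
            break
        granted = min(wanted, remaining)
        winners.append((bidder, granted))
        remaining -= granted
        price = bid  # every accepted bidder pays the lowest accepted bid
    return price, winners

bids = [("impatient", 10000.0, 100), ("normal", 1.20, 500), ("frugal", 0.43, 2000)]
print(clearing_price(bids, capacity=10000))  # plenty of capacity: the price stays at 0.43
print(clearing_price(bids, capacity=100))    # capacity collapses: the 10000.0 bid sets the price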
1,139 | I'm using Packer to create an AWS AMI based on an Ubuntu 16.04 image. In the beginning, I'm doing an upgrade: sudo apt-get update
sudo apt-get upgrade -y Here is the relevant part of my provisioners section: "provisioners": [
{
"type": "shell",
"inline": [
"sudo apt-get update",
"sudo apt-get upgrade -y"
]
}
] This breaks the automation, however, as an interactive dialog pops up: amazon-ebs: Found kernel: /boot/vmlinuz-4.4.0-72-generic
amazon-ebs: A new version of /boot/grub/menu.lst is available, but the version installed
amazon-ebs: currently has been locally modified.
amazon-ebs:
amazon-ebs: 1. install the package maintainer's version
amazon-ebs: 2. keep the local version currently installed
amazon-ebs: 3. show the differences between the versions
amazon-ebs: 4. show a side-by-side difference between the versions
amazon-ebs: 5. show a 3-way difference between available versions
amazon-ebs: 6. do a 3-way merge between available versions (experimental)
amazon-ebs: 7. start a new shell to examine the situation I also tried to set export DEBIAN_FRONTEND=noninteractive before (as recommended in this answer ). Unfortunately, it makes no difference. Questions: Is there a way to get past the interactive dialog (selecting option 1 would be fine)? Or is it better to avoid upgrades and instead trust that the AMIs are up to date and contain the critical security patches? Background: This is the relevant part of my "builders" section, where I configured it to use the latest available AMI: "builders": [{
"type": "amazon-ebs",
"region": "eu-central-1",
...
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "*ubuntu-xenial-16.04-amd64-server-*",
"root-device-type": "ebs"
},
"owners": ["099720109477"],
"most_recent": true
},
...
}] Note : Turns out that the noninteractive mode works if you run apt-get upgrade with both the -y and the -q flag. | This sequence of commands works for me: apt-get update
DEBIAN_FRONTEND=noninteractive apt-get upgrade -yq So, DEBIAN_FRONTEND=noninteractive is correct but you also need the -q flag. Source: https://github.com/moby/moby/issues/4032 | {
"source": [
"https://devops.stackexchange.com/questions/1139",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/2708/"
]
} |
1,201 | A few of the GitHub projects I contribute to use Travis CI for build testing. However, I've noticed that some projects use travis-ci.org for build testing, while others use travis-ci.com . Both sites seem to function identically, even down to using the same UI. What's the difference between travis-ci.org and travis-ci.com ? Why do some projects use one over the other? | Travis CI originally created two separated platforms to differentiate between private repos / paid ( travis-ci.com ) and public repos / free ( travis-ci.org ). However, as of May 2018, new users and projects (both private and public) should only use travis-ci.com (see this blog post ). Note that travis-ci.org will be closed down completely on December 31st, 2020 (see this newsletter ). Although still in beta—which is a bit weird since travis-ci.org will be shut down soon—Travis CI provides a well-documented migration guide . | {
"source": [
"https://devops.stackexchange.com/questions/1201",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/2449/"
]
} |
1,293 | I'm having a discussion with a friend about use cases for Docker . One guy in the team wants to use Docker for everything - like a kind of universal unix process wrapper. The other thinks that Docker should only be used for stateless applications like Microservices and AWS Lambda style apps. We've engineered proof of concepts for both. On our docker cluster we have a shared drive that gets mounted when the Docker host is mounted, and if a Database in a container is mounted, it simply mounts a volume to the shared drive. My friend still sticks to his position, despite being shown the contrary evidence. (He also argues that Docker adds unnecessary risk by adding complexity to the stack.) I'm trying to listen and understand his point of view, both in an act of empathy, but also to better reason with him. (We all get on quite well - so this is a mix of in-jest and serious discussion). Kind of the question behind the question is: are databases cattle ? This comment suggests that a good automated backup and retrieval strategy for your database is indistinguishable from a cattle server. My question is: What are the reasons Docker should not be used for databases? EDIT: People have asked me to clarify my terminology. I was assuming that the database application was in the container, and the storage was in the volume. What I meant was, the RDBMS is in the container, and the database storage is in the volume. Some commentators have suggested that the docker volume drivers aren't going to work with database writes very well. (Or something to that effect). Could you please expand on that? | When people talk about running a database in Docker, they do not mean to store the data in a container; they are talking about having a docker image with the DB software, and mounting the data as a volume (a bind volume, not a container volume). Volumes are an essential part in Docker, and are not something that is flakey or just tacked on. Docker is not just made for stateless (micro)services. Wish as I might, I cannot find a technical reason not to run a database in a Docker, so unfortunately I'll pick the other side of the argument and hence maybe not give you the answer you are looking for. (I'm using Oracle as an example because I'm familiar with it, both bare metal and dockerized, and because it's quite a notorious beast for being just a bit non-trivial to operate if you go past default settings.) Packaging up the DB software itself in a container gives you the usual benefits - having the same version everywhere, avoiding dependency/shared library issues, being able to spin up the exact same DB on developer laptops or wherever you need it. It is a snap getting it to run anywhere; updating is trivial, and so on. All the Docker benefits apply. There is an Oracle image on Dockerhub which allows you to spin up a working DB in a minute or three (and for the others as well, of course). People did do performance tests and found no I/O differences between volumes and bare metal ( https://www.percona.com/blog/2016/02/11/measuring-docker-io-overhead/ , https://stackoverflow.com/questions/21889053/what-is-the-runtime-performance-cost-of-a-docker-container ). Under the hood, it's not like Docker somehow intercepts all I/O, anyway. It just gets creative with standard Linux tools (bind mounts in this case, mangling of the internal kernel tables that make the Docker-fu possible at all). 
Obviously that does not mean that you can run two instances of the DB and just have them work on the same files, but nobody is implying that. Docker does not give you automatic simultaneous and magically race-free access to volumes, and never pretended to do so. The rest of the benefits still apply. If your DB itself does not detect conflicts like this, you had better supply a CMD script to the image which refuses to spin up a second container when the volume is already in use. You have to be a little more careful spinning up/shutting down the container (just as you would not simply switch off a bare metal DB server), but that should be quite manageable. Now, depending on circumstances, there may be soft reasons not to do it: Oracle (the company), for example, might not support you if you run their RDBMS in a Docker container on production systems (in 2021, 3 years after writing this answer, it is unclear to me if that is still true). But maybe you are using dockerized Oracle RDBMS images only for your developers and the testing environment, where you would not need their support in any case, reserving it for a bare metal production server. (But don't forget to pay your licenses...). If the ops guys are unfamiliar with Docker, it might just be a bit easier to accidentally kill everything, destroy your data files, etc. If you have big dedicated metal DB machines already, with large amounts of very fast dedicated SAN storage, and running nothing else anyway, then there would just be no point in using Docker to containerize those, as you will never just spin another server up when there are 100s of GB or even TB of data. After all, for production, an RDBMS like Oracle is very, very advanced in all the replication, data integrity, no-downtime failover, etc. aspects. Note that this argument just says "you do not need to containerize your RDBMS". It does not say "you should not do it" - maybe you want to do it because you wish to roll out database software upgrades through containers or for whatever other reason you could imagine. So there you go. By all means do dockerize your DB, at the very least for your developers (who will be eternally thankful) and your testing environments. In production, it will come down to taste, and there at least, I would also prefer the solution that sits best with the specialized DBA/Ops team - if they have decades of experience working with bare metal DB servers, then by all means trust them to continue to do so. But if you are a startup that has all its IT in the cloud anyway, then a Docker container would just be one further piece of the onion in the whole picture. | {
"source": [
"https://devops.stackexchange.com/questions/1293",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/465/"
]
} |
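As a hedged illustration of the databases-in-Docker answer to question 1,293 above, here is a small sketch of "the RDBMS lives in the image, the data lives in a bind-mounted volume", written with the Docker SDK for Python to stay consistent with the other Python examples in this document. The postgres:13 image, the host path /srv/pgdata and the placeholder password are assumptions for illustration only, not something prescribed by the answer.
# Sketch: disposable database container, persistent data directory on the host.
import docker

client = docker.from_env()
db = client.containers.run(
    "postgres:13",                                    # the DB software comes from the image
    name="mydb",
    detach=True,
    environment={"POSTGRES_PASSWORD": "change-me"},   # placeholder secret for the example
    ports={"5432/tcp": 5432},
    volumes={
        "/srv/pgdata": {                              # placeholder host path holding the data files
            "bind": "/var/lib/postgresql/data",
            "mode": "rw",
        },
    },
)
print(db.name, db.status)
Destroying and recreating the container leaves /srv/pgdata untouched, which is exactly the property the answer leans on.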
1,658 | I'm new to Ansible. Here's my task ... I have 400+ hosts, and I need to verify if 5 different ports are open from their end to our web server. Individually, I could log in and run: telnet mywebserver.com 443
telnet mywebserver.com 80
telnet mywebserver.com 8443 ..and so on.. What module or plugin could be used in Ansible so I could automate this, and have it report the results (whether open or closed ports) back to my Ansible server? | You can use the Ansible wait_for module which checks a specific TCP port is open. Since in this case, all ports should be open already, we can use a minimal no. of retries, just enough to cover network issues: - name: Check all port numbers are accessible from the current host
wait_for:
host: mywebserver.com
port: "{{ item }}"
state: started # Port should be open
delay: 0 # No wait before first check (sec)
timeout: 3 # Stop checking after timeout (sec)
ignore_errors: yes
with_items:
- 443
- 80
- 8443 By default, Ansible will check once every second (configurable in Ansible 2.3 using the sleep attribute), so this will check 3 times per port. Run this in a playbook against your inventory of 400+ hosts - Ansible will check in parallel that all hosts can reach mywebserver.com on those ports. The parallelism is subject to the forks setting in your ansible.cfg . We use ignore_errors: yes here so that any errors are marked in red but do not stop execution. Open ports are reported as ok items in output and closed ports are reported as failed (you must use the -vv flag on ansible-playbook to see this output). Fine-tuning output If you want more specific output for the success and failure cases, the code must be more complex, adding a second task: the wait_for task must register a variable; the second task produces output using debug based on a success/failure condition (e.g. using a Jinja2 conditional expression ); then you need to put both these tasks in an include file (without any with_items loop), and write a main playbook task that uses an include ... with_items to call the include file once per port. | {
"source": [
"https://devops.stackexchange.com/questions/1658",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/3781/"
]
} |
1,750 | We have the following block in our Dockerfile : RUN yum -y update
RUN yum -y install epel-release
RUN yum -y groupinstall "Development Tools"
RUN yum -y install python-pip git mysql-devel libxml2-devel libxslt-devel python-devel openldap-devel libffi-devel openssl-devel I've been told that we should unite these RUN commands to cut down on created docker layers: RUN yum -y update \
&& yum -y install epel-release \
&& yum -y groupinstall "Development Tools" \
&& yum -y install python-pip git mysql-devel libxml2-devel libxslt-devel python-devel openldap-devel libffi-devel openssl-devel I'm very new to docker and not sure I completely understand the differences between these two versions of specifying multiple RUN commands. When would one unite RUN commands into a single one and when it makes sense to have multiple RUN commands? | A docker image is actually a linked list of filesystem layers. Each instruction in a Dockerfile creates a filesystem layer that describes the differences in the filesystem before and after execution of the corresponding instruction. The docker inspect subcommand can be used on a docker image to reveal its nature of being a linked list of filesystem layers. The number of layers used in an image is important when pushing or pulling images, as it affects the number of concurrent uploads or downloads occuring. when starting a container, as the layers are combined together to produce the filesystem used in the container; the more layers are involved, the worse the performance is, but the different filesystem backends are affected differently by this. This has several consequences for how images should be built. The first and most important advice I can give is: Advice #1 Make sure that the build steps where your source code is involved comes as late as possible in the Dockerfile and are not tied to previous commands using a && or a ; . The reason for this, is that all the previous steps will be cached and the corresponding layers will not need to be downloaded over and over again. This means faster builds and faster releases, which is probably what you want. Interestingly enough, it is surprisingly hard to make optimal use of the docker cache. My second advice is less important but I find it very useful from a maintenance view point: Advice #2 Do not write complex commands in the Dockerfile but rather use scripts that are to be copied and executed. A Dockerfile following this advice would look like COPY apt_setup.sh /root/
RUN sh -x /root/apt_setup.sh
COPY install_pacakges.sh /root/
RUN sh -x /root/install_packages.sh and so on. The advice of binding several commands with && has only a limited scope. It is much easier to write with scripts, where you can use functions, etc. to avoid redundancy or for documentation purposes. People interested by pre-processors and willing to avoid the small overhead caused by the COPY steps and are actually generating on-the-fly a Dockerfile where the COPY apt_setup.sh /root/
RUN sh -x /root/apt_setup.sh sequences are replaced by RUN base64 --decode … | sh -x where the … is the base64-encoded version of apt_setup.sh . My third advice is for people who wants to limit the size and the number of layers at the possible cost of longer builds. Advice #3 Use the with -idiom to avoid files present in intermediary layers but not in the resulting filesystem. A file added by some docker instruction and removed by some later instruction is not present in the resulting filesystem but it is mentioned two times in the docker layers constituting the docker image in construction. Once, with name and full content in the layer resulting from the instruction adding it, and once as a deletion notice in the layer resulting from the instruction removing it. For instance, assume we temporarily need a C compiler and some image and consider the # !!! THIS DISPLAYS SOME PROBLEM --- DO NOT USE !!!
RUN apt-get install -y gcc
RUN gcc --version
RUN apt-get --purge autoremove -y gcc (A more realistic example would build some software with the compiler instead of merely asserting its presence with the --version flag.) The Dockerfile snippet creates three layers, the first one contains the full gcc suite so that even if it is not present in the final filesystem the corresponding data is still part of the image in same manner and need to be downloaded, uploaded and unpacked whenever the final image is. The with -idiom is a common form in functional programming to isolate resource ownership and resource releasing from the logic using it. It is easy to transpose this idiom to shell-scripting, and we can rephrase the previous commands as the following script, to be used with COPY & RUN as in Advice #2. # with_c_compiler SIMPLE-COMMAND
# Execute SIMPLE-COMMAND in a sub-shell with gcc being available.
with_c_compiler()
(
set -e
trap 'apt-get --purge autoremove -y gcc' EXIT
apt-get install -y gcc
"$@"
)
with_c_compiler\
gcc --version Complex commands can be turned into function so that they can be fed to the with_c_compiler . It is also possible to chain calls of several with_whatever functions, but maybe not very desirable. (Using more esoteric features of the shell, it is certainly possible to make the with_c_compiler accept complex commands, but it is in all aspects preferable to wrap these complex commands into functions.) If we want to ignore Advice #2, the resulting Dockerfile snippet would be RUN apt-get install -y gcc\
&& gcc --version\
&& apt-get --purge autoremove -y gcc which is not so easy to read and maintain because of the obfuscation. See how the shell-script variant outs emphasis on the important part gcc --version while the chained- && variant buries that part in the middle of noise. | {
"source": [
"https://devops.stackexchange.com/questions/1750",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/60/"
]
} |
1,898 | There are quite some questions and answers that mention " artifactory ". I wouldn't be surprised if it is somehow related to artifacts . My questions : What is actually an "artifactory" (in the context of DevOps)? Why are artifactories used? | Artifactory is a product by JFrog that serves as a binary repository manager . That said very often one will use a 'artifactory' as a synonym of the more general binary repository, much like many people use Frigidaire or fridge to denote the refrigerator regardless if it is a Frigidaire brand or not. The binary repository is a natural extension to the source code repository, in that it will store the outcome of your build process, often denoted as artifacts. Most of the times one would not use the binary repository directly but through a package manager that comes with the chosen technology. In most cases these will store individual application components that can later be assembled into a full product - thus allowing a build to be broken in smaller chunks, making more efficient use of resources, reducing build times, better tracking of binary debug databases etc. Here are some of the most popular package managers that can be managed using a binary repository: Java: jar, ear, war etc has Maven and the official MavenCentral . There are many other package managers that will use the maven binary repository format as well ( ivy , gradle etc). .Net: nuget for .NET components (DLL and EXE) but can also be used as a distribution mechanism under windows thorugh systems like Chocolatey . Newer versions of Powershell can also leverage this to distribute powershell modules though the powershell gallery of which one could build a local distribution with a binary repository and a repository in nuget format. Also check OneGet if Windows distribution management is of interest to you. In JavaScript: we have npm which is one of the most popular, will require nodejs . In python: there is pip and the official package index pypi , which one can also create a local instance through binary repository that will support the format. This list is far from complete, just gives an idea of what's out there. The binary repository can allow to host all of these under one roof, making their management much simpler for teams. Note that you do not need a very large team to start reaping benefits from binary package management. The initial investment is not very large and the benefits are felt immediately. Especially now that more and more platforms, frameworks and languages are integrating this dependency management directly in them.
Their biggest advantage, I have found, is that they create an environment that your programmers will find natural and comfortable, which makes them essential. It helps you as a DevOps engineer to build a solid tool-chain, and it helps them make the overall experience fit naturally in their stack of choice. As I said earlier, there are many products out there that can serve as binary package managers, some more generic than others in their target usage, varying also widely in their accessibility and prices. My personal opinion is that binary repositories are as vital a part of a well designed devops setup as the source code repository or continuous integration. | {
"source": [
"https://devops.stackexchange.com/questions/1898",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/40/"
]
} |
1,943 | Finally, you are so much in love with Docker that you want to move your online business-critical production systems with sensitive customer data to a Docker Swarm. Some might even already have done so. The other organization can't afford it by a policy forbidding production processes running in root mode. What could be a checklist of building blocks to consider for a Docker production environment? One does not need all of them, but all of them should be important to be assessed. Disclaimer: I know there is a SE policy to avoid "large endless lists" but I think this checklist cannot be very big... and endless noway. So - what are these buildings blocks? If not already deployed, consider running a Linux host system with advanced
security settings - hardened kernel, SELinux etc. Consider using a tiny Docker base image, like alpine, busybox or even scratch, e.g. start with an empty base image Use a USER setting other than root Carefully assess how to further reduce the already shrunken set of kernel capabilities granted to the container Consider having only one executable binary per container to launch your process, ideally statically linked Those who want to break your system to get shell access might wonder if they found out your container has all shells disabled Mount volumes read-only wherever possible Question: what else? | The host on which the containers are running Run the docker security bench on every node that runs docker containers https://github.com/docker/docker-bench-security Running the following command on a node that runs docker containers: docker run -it --net host --pid host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /var/lib:/var/lib \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/lib/systemd:/usr/lib/systemd \
-v /etc:/etc --label docker_bench_security \
docker/docker-bench-security returns a list of checks: [INFO] 1 - Host Configuration
[WARN] 1.1 - Ensure a separate partition for containers has been created
[NOTE] 4.2 - Ensure that containers use trusted base images
[PASS] 4.6 - Ensure HEALTHCHECK instructions have been added to the container image Quote from the repository README: The Docker Bench for Security is a script that checks for dozens of
common best-practices around deploying Docker containers in
production. The tests are all automated, and are inspired by the CIS
Docker Community Edition Benchmark
v1.1.0 . Some of the issues that are reported by the security bench could be solved by reading the official docker security article and comparing it with the bullets that are defined in the question the following things are important as well: protect the docker daemon socket by implementing ssl content trust using notary and DOCKER_CONTENT_TRUST variable | {
"source": [
"https://devops.stackexchange.com/questions/1943",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/707/"
]
} |
2,191 | I've taken over the project where a lot of Jenkins credentials has passwords or passphrase strings which I need to know in order to progress with the project, unfortunately these weren't documented anywhere. I've checked the credentials.xml file where these credentials are stored, but they're in not plain text, e.g.: <passphrase>{AAAAAAAAAAAANzxft/rDzyt8mhxpn3O72dxvVqZksL5vBJ4jNKvAjAA=}</passphrase> Note: I've changed it slightly for privacy reasons. How can I decrypt its original password based on the string above? | Luckily there is a hudson.util.Secret.decrypt() function which can be used for this, so: In Jenkins, go to: /script page. Run the following command: println(hudson.util.Secret.decrypt("{XXX=}")) or: println(hudson.util.Secret.fromString("{XXX=}").getPlainText()) where {XXX=} is your encrypted password. This will print the plain password. To do opposite, run: println(hudson.util.Secret.fromString("some_text").getEncryptedValue()) Source: gist at tuxfight3r/jenkins-decrypt.groovy . Alternatively check the following scripts: tweksteen/jenkins-decrypt , menski/jenkins-decrypt.py . For more details, check: Credentials storage in Jenkins . | {
"source": [
"https://devops.stackexchange.com/questions/2191",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/3/"
]
} |
2,506 | I am using a third party library that creates sibling docker containers via: docker run -d -v /var/run/docker.sock:/var/run/docker.sock ... I am trying to create a Kubernetes deployment out of the above container, but currently getting: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? This is expected because I am not declaring /var/run/docker.sock as a volume in the deployment yaml. The problem is I don't know how to do this. Is it possible to mount /var/run/docker.sock as a volume in a deployment yaml? If not, what is the best approach to run docker sibling-containers from within a Kubernetes deployment/pod? | Unverified as it sounds brittle to me to start a container outside of k8s supervision, but you should be able to mount /var/run/docker.sock with a hostPath volume . Example variation from the documentation: apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: gcr.io/google_containers/test-webserver
name: test-container
volumeMounts:
- mountPath: /var/run/docker.sock
name: docker-sock-volume
volumes:
- name: docker-sock-volume
hostPath:
# location on host
path: /var/run/docker.sock
# this field is optional
type: File I think a simple mount should be enough to allow communication from docker client within the container to docker daemon on host but in case you get a write permission error it means you need to run your container as privileged container
using a securityContext object like such (just an extract from above to show the addition, values taken from the documentation ): spec:
containers:
- image: gcr.io/google_containers/test-webserver
securityContext:
privileged: true
name: test-container | {
"source": [
"https://devops.stackexchange.com/questions/2506",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/4939/"
]
} |
2,731 | I want to manually download a Docker Image from Docker Hub . More specifically, I want to download a Docker Image from Docker Hub on a machine in a restricted environment which does not (and cannot) have the Docker client software installed. I would have thought that this would be possible using the official API , but this does not appear to be the case - see the following discussion: Fetch docker images without docker command. e.g. with wget Is it really the case that the API doesn't support downloading images? Is there a way to work around this? UPDATE 1: I came across the following ServerFault post: Downloading docker image for transfer to non-internet-connected machine The accepted solution uses the docker save command, which doesn't help in my situation. But another solution posted there cites the following StackOverflow post: Pulling docker images One of the solutions there refers to a command-line tool called docker-registry-debug which, among other things, can generate a curl command for downloading an image. Here is what I got: user@host:~$ docker-registry-debug curlme docker ubuntu
# Reading user/passwd from env var "USER_CREDS"
# No password provided, disabling auth
# Getting token from https://index.docker.io
# Got registry endpoint from the server: https://registry-1.docker.io
# Got token: signature=1234567890abcde1234567890abcde1234567890,repository="library/docker",access=read
curl -i --location-trusted -I -X GET -H "Authorization: Token signature=1234567890abcde1234567890abcde1234567890,repository="library/docker",access=read" https://registry-1.docker.io/v1/images/ubuntu/layer
user@host:~$ curl \
-i --location-trusted -I -X GET \
-H "Authorization: Token signature=1234567890abcde1234567890abcde1234567890,repository="library/docker",access=read"
https://registry-1.docker.io/v1/images/ubuntu/layer
HTTP/1.1 404 NOT FOUND
Server: gunicorn/18.0
Date: Wed, 29 Nov 2017 01:00:00 GMT
Expires: -1
Content-Type: application/json
Pragma: no-cache
Cache-Control: no-cache
Content-Length: 29
X-Docker-Registry-Version: 0.8.15
X-Docker-Registry-Config: common
Strict-Transport-Security: max-age=31536000 So unfortunately it looks like the curl command generated does not work. UPDATE 2: It looks like I'm able to download layer blobs from Docker Hub. Here is how I'm currently going about it. Get an authorization token: user@host:~$ export TOKEN=\
"$(curl \
--silent \
--header 'GET' \
"https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull" \
| jq -r '.token' \
)" Pull an image manifest: user@host:~$ curl \
--silent \
--request 'GET' \
--header "Authorization: Bearer ${TOKEN}" \
'https://registry-1.docker.io/v2/library/ubuntu/manifests/latest' \
| jq '.' Pull an image manifest and extract the blob sums: user@host:~$ curl \
--silent \
--request 'GET' \
--header "Authorization: Bearer ${TOKEN}" \
'https://registry-1.docker.io/v2/library/ubuntu/manifests/latest' \
| jq -r '.fsLayers[].blobSum'
sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
sha256:be588e74bd348ce48bb7161350f4b9d783c331f37a853a80b0b4abc0a33c569e
sha256:e4ce6c3651b3a090bb43688f512f687ea6e3e533132bcbc4a83fb97e7046cea3
sha256:421e436b5f80d876128b74139531693be9b4e59e4f1081c9a3c379c95094e375
sha256:4c7380416e7816a5ab1f840482c9c3ca8de58c6f3ee7f95e55ad299abbfe599f
sha256:660c48dd555dcbfdfe19c80a30f557ac57a15f595250e67bfad1e5663c1725bb Download a single layer blob and write it to a file: user@host:~$ BLOBSUM=\
"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
user@host:~$ curl \
--silent \
--location \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
"https://registry-1.docker.io/v2/library/ubuntu/blobs/${BLOBSUM}" \
> "${BLOBSUM/*:/}.gz" Write all of the blob sums to a file: user@host:~$ curl \
--silent \
--request 'GET' \
--header "Authorization: Bearer ${TOKEN}" \
'https://registry-1.docker.io/v2/library/ubuntu/manifests/latest' \
| jq -r '.fsLayers[].blobSum' > blobsums.txt Download all of the layer blobs from the manifest: user@host:~$ while read BLOBSUM; do
curl \
--silent \
--location \
--request 'GET' \
--header "Authorization: Bearer ${TOKEN}" \
"https://registry-1.docker.io/v2/library/ubuntu/blobs/${BLOBSUM}" \
> "${BLOBSUM/*:/}.gz"; \
done < blobsums.txt Now I have a bunch of layer blobs and I need to recombine them into an image - I think. Related Links: Docker Community Forums: Docker Hub API retrieve images Docker Community Forums: Manual download of Docker Hub images Docker Issue #1016: Fetch docker images without docker command. e.g. with wget ServerFault: Downloading docker image for transfer to non-internet-connected machine StackOverflow: Downloading docker image for transfer to non-internet-connected machine StackOverflow: How to download docker images without using pull command? StackOverflow: Is there a way to download docker hub images without “docker pull” for a machine with out Internet access? StackOverflow: Docker official registry (Docker Hub) URL | It turned out that the Moby Project has a shell script on the Moby Github which can download images from Docker Hub in a format that can be imported into Docker: download-frozen-image-v2.sh The usage syntax for the script is given by the following: download-frozen-image-v2.sh target_dir image[:tag][@digest] ... The image can then be imported with tar and docker load : tar -cC 'target_dir' . | docker load To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker: user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/# In practice I would have to first copy the data from the internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed): user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~ and then load and use the image on the target host: user@hasdocker:~ docker load ubuntu.tar
user@hasdocker:~ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/# | {
"source": [
"https://devops.stackexchange.com/questions/2731",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/3070/"
]
} |
3,073 | Currently, I'm going to need an implementation that must find all files within a directory and start a parallel task for every file found. Is it possible to achieve this using declarative pipelines? pipeline {
agent any
stages {
stage("test") {
steps {
dir ("file_path") {
// find all files with complete path
parallel (
// execute parallel tasks for each file found.
// this must be dynamic
}
}
}
}
}
}
} | Managed to solve it with the following code: pipeline {
agent { label "master"}
stages {
stage('1') {
steps {
script {
def tests = [:]
for (f in findFiles(glob: '**/html/*.html')) {
tests["${f}"] = {
node {
stage("${f}") {
echo "${f}"
}
}
}
}
parallel tests
}
}
}
}
} | {
"source": [
"https://devops.stackexchange.com/questions/3073",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/6066/"
]
} |
3,104 | I need to make some configuration changes on our Jenkins instance that will involve restarting Jenkins a couple of times. However, our developers are committing frequently enough that I haven't seen Jenkins without jobs running in three days. Is there a native way (either through the GUI or via command line) to safe-restart Jenkins? IE: wait for current jobs to finish before going down, and keep track of queued jobs to start once Jenkins comes back up. I know there's a plugin but in order to install it I need to restart Jenkins... | If you navigate to $YOUR_JENKINS_URL/updateCenter/ you should see the following page: Here you can check Restart Jenkins when installation is complete and no jobs are running which should be fairly safe. | {
"source": [
"https://devops.stackexchange.com/questions/3104",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/736/"
]
} |
3,127 | I want to copy files recursively to a Kubernetes pod I tried kubectl cp -r I got: error: unknown shorthand flag: 'r' in -r What are the best ways to transfer whole directories recursively into a pod. | kubectl cp by default does recursive copies when given a directory, although it seems to be picky about trailing slashes. If foo/bar is the directory you'd like to copy, simply run kubectl cp /path/to/foo/bar <pod-id>:/path/in/container/foo/ | {
"source": [
"https://devops.stackexchange.com/questions/3127",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/2494/"
]
} |
3,400 | I have a multibranch job set to run any branch with a Jenkinsfile. I have some options I can think of if I want to remove a branch from the list of jobs running for the multi-branch pipeline. I can delete the branch I can delete the Jenkinsfile in that branch The second solution is good, except I need to commit and push that to the git repo for my branch, and if that branch is merged into another branch it blows away the Jenkinsfile. What is the best way to disable only some branches of a multibranch pipeline? | Jenkins can filter branches in a multibranch pipeline by name using a wildcard or regular expression. | {
"source": [
"https://devops.stackexchange.com/questions/3400",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/2494/"
]
} |
3,902 | I have a base docker image which is used to run image analysis software. For each container created from the image, there are a set of configuration settings, some of which are secrets (encryption keys, customer information, etc.), that are used by the software to analyze and distribute the processed images. How can I safely pass these secrets to a container? | You have 3 methods to get secrets to an app inside a docker container. The first 2 involve docker configuration. The last one is to have your apps directly fetch secrets from a secret store. 1 - Environment variables According to "The 12 Factor App" guide , secrets are merely config, and they should always be set in the environment. You could set your secrets as environment variables during the docker run, and your app accesses them from there. 2 - Mounted volumes You could have your secrets all within a particular configuration/secrets file, then mount that to your instance as a mounted volume . 3 - Fetch from secret store As @030 mentioned, you can use Hashicorp Vault (or "Amazon Secrets Manager", or any service like that). Your app, or a sidecar app can fetch the secrets it needs directly, without having to deal with any configuration on the Docker container. This method would allow you to use Dynamically created secrets (a very appealing feature of such systems) and without having to worry about the secrets being view-able from the file system or from inspecting the env variables of the docker container. Personal Opinion I believe env variables is the way to go. It's easier to manage, and you can still pull from a secret store like Hashicorp Vault, if you have you CI build system pull the secrets during the build and set them when you deploy. You get the best of both worlds, and the added benefit of your developers not needing to write application code to fetch secrets. Devs should be focused on their code functionality, and not dealing with admin tasks like fetching passwords. Your application's code should be focused on it's own app functionality itself, and not dealing with backend tasks like fetching passwords. Just like the 12 Factor App states. Edit: changed last sentence to remove implication of Developer vs SysAdmin silo-ing. The tasks themselves should be separate from a code perspective, but DevOps is about the same persons keeping both in mind and not be limited. Personal Opinion (Update) Per @Dirk's excellent comment ( Passing secrets to a Docker container ), there is a very strong argument to prioritize a secret store over ENV vars, due to not wanting to leak them. | {
"source": [
"https://devops.stackexchange.com/questions/3902",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/4328/"
]
} |
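As an illustration of the secrets answer to question 3,902 above, here is a small Python sketch of how application code can consume a secret injected by method 1 (an environment variable set at docker run time) with a fallback to method 2 (a file from a mounted volume). The variable name DB_PASSWORD and the path /run/secrets/db_password are placeholders chosen for the example.
# Sketch: read a secret from the environment, falling back to a mounted file.
import os
from pathlib import Path

def read_secret(env_var="DB_PASSWORD", mounted_file="/run/secrets/db_password"):
    # Method 1: docker run -e DB_PASSWORD=... myimage
    value = os.environ.get(env_var)
    if value:
        return value
    # Method 2: docker run -v /host/secrets:/run/secrets:ro myimage
    path = Path(mounted_file)
    if path.is_file():
        return path.read_text().strip()
    raise RuntimeError(f"secret not provided via {env_var} or {mounted_file}")

if __name__ == "__main__":
    print("loaded a secret of length", len(read_secret()))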
3,980 | What does "Does not have minimum availability" mean? A GitHub discussion was found, but it is not clear to me what the error message means. | As @Tensibai indicated in one of the comments, this could be caused by insufficient CPU or memory, but that is not always the case. For example, a helm chart was just deployed, it failed, and the workload in GCP indicated that: Pod errors: CrashLoopBackOff Based on the comment of @Tensibai, the first impression was that there were insufficient resources, but further analysis using kubectl describe pod <pod-name> indicated that in this case the livenessProbe check failed: Liveness probe failed: Get http://10.16.0.13:80/: dial
tcp 10.16.0.13:80: getsockopt: connection refused In summary, the Does not have minimum availability message is generic. Multiple issues could trigger it, and more in-depth analysis is required to find the actual error. | {
"source": [
"https://devops.stackexchange.com/questions/3980",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/210/"
]
} |
4,292 | I have my security groups in a securitygroup.tf file. In the same dir there are plenty of other resource descriptions (rds, ec2 etc). Is there a way to perform a terraform apply --auto-approve only for my securitygroups.tf ? | Not really. The standard way to work around this though is to use eg: terraform apply -target=aws_security_group.my_sg but that's only going to apply one security group at a time, so will get tedious if you have a lot of them. You can, however, target multiple resources in one command: terraform apply -target=aws_security_group.my_sg -target=aws_security_group.my_2nd_sg However, there are potentially a couple of workarounds: The -target parameter respects dependencies. This means if you were to eg. -target=aws_instance.my_server and that instance had, say, five security groups attached to it via interpolation, changes to those security groups should be included in the plan (I haven't thoroughly tested this, but I believe this is how it works). That is a bit messy though, as you probably don't want to touch an instance. A safer alternative might be using something like a null_resource to provide a target for the security groups, but again I haven't tried this (there might be another 'safe' resource you could rely on, though?). Create a module. You can target a module just like you can target a plain resource (be sure to include the quotes around the target name): terraform apply -target="module.my_security_groups" Inside this module, you could define all of your security groups - just like you would have outside of the module. As well as being able to target it directly, this also makes it easier for you to re-use the same set of security groups for other infrastructure, if you ever need to. | {
"source": [
"https://devops.stackexchange.com/questions/4292",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/4613/"
]
} |
4,540 | I've got the following Dockerfile : FROM ubuntu:xenial
RUN useradd -d /home/ubuntu -ms /bin/bash -g root -G sudo -p ubuntu ubuntu
WORKDIR /home/ubuntu
USER ubuntu
VOLUME /opt/myvolume Which I built it: $ docker build -t vol-test .
Sending build context to Docker daemon 2.048kB
Step 1/5 : FROM ubuntu:xenial
---> 0b1edfbffd27
Step 2/5 : RUN useradd -d /home/ubuntu -ms /bin/bash -g root -G sudo -p ubuntu ubuntu
---> Using cache
---> d82e3ecc5fe8
Step 3/5 : WORKDIR /home/ubuntu
---> Using cache
---> ab1db29ee8bf
Step 4/5 : USER ubuntu
---> Using cache
---> 129393a35d9e
Step 5/5 : VOLUME /opt/myvolume
---> Running in 691a4cbd077e
Removing intermediate container 691a4cbd077e
---> 11bc9e9db9d3
Successfully built 11bc9e9db9d3
Successfully tagged vol-test:latest However, when run, the /opt/myvolume directory is owned by root , not ubuntu : $ docker run vol-test id
uid=1000(ubuntu) gid=0(root) groups=0(root),27(sudo)
$ docker run vol-test find /opt/myvolume -ls
66659 4 drwxr-xr-x 2 root root 4096 Jul 18 23:02 /opt/myvolume
$ docker run -u ubuntu vol-test find /opt/myvolume -ls
66940 4 drwxr-xr-x 2 root root 4096 Jul 18 23:12 /opt/myvolume because it's created during the run. Is it possible to define or change the default owner of VOLUME directory in Dockerfile ? I'm running it on macOS and Linux. | As stated in the documentation , VOLUME instruction inherits the directory content and permissions existing in the container, so you can workaround the problem with a dockerfile like this: FROM ubuntu:xenial
RUN useradd -d /home/ubuntu -ms /bin/bash -g root -G sudo -p ubuntu ubuntu
RUN mkdir /opt/myvolume && chown ubuntu /opt/myvolume
WORKDIR /home/ubuntu
VOLUME /opt/myvolume The creation of the directory has to be done as root (to be able to write within /opt). | {
"source": [
"https://devops.stackexchange.com/questions/4540",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/3/"
]
} |
4,546 | I have an issue with amazon free tier account, I have deployed two EC2 instances (1 linux and 1 windows) on amazon each having 30GB of storage which is permissible under free tier, these instances are continuously running 24/7 so that they are always available,what I don't understand is that starting last month I have had charges to my account for exceeding cloud storage usage that have left me confused. I have read details on Free Tier more than 5 times but I still can't understand what I'm doing wrong, checked the Cost Explorer, the only thing that comes to mind is perhaps the 30GB is applied universally for all instances but I am not sure as I don't know how my account usage is calculated. I have tried contacting support assistant but I am yet to receive any response from them. I would really appreciate if someone helped set some things straight because I have hit a wall with this one. | | {
"source": [
"https://devops.stackexchange.com/questions/4546",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/8853/"
]
} |
4,553 | So I'm automating a process we currently have where we deploy a database and then have an exe that 'activates' the database. The default values are different for each product and is determined by a cmd line arg. I've created 3 VSTS Release environments based on the hardware IO board for the environment. How can I pass in this product flag at deploy time as it isn't dependent on the environment or release itself. Edit:
The product flag value isn't known until the database is deployed into an environment, and value can not be set in the release definition creation time or at release time. Only when deployed into a particular environemnt | | {
"source": [
"https://devops.stackexchange.com/questions/4553",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/3062/"
]
} |
4,864 | I can docker run -p 3000:3000 image without EXPOSE ing that port in the container (see below). If that's true, then why bother putting EXPOSE in the Dockerfile? Is it just for communication to image users? Because I don't know of a functional reason to EXPOSE ports if they are all bindable anyways. Here are the steps showing me binding to a port in a container despite the fact it is not EXPOSEd $ cat Dockerfile
FROM alpine
RUN apk add nodejs npm vim
COPY webserver /webserver
CMD [ "node", "/webserver/index.js" ]
$ docker build .
Sending build context to Docker daemon 1.931MB
Step 1/4 : FROM alpine
---> 11cd0b38bc3c
Step 2/4 : RUN apk add nodejs npm vim
---> Using cache
---> 4270f8bdb201
Step 3/4 : COPY webserver /webserver
---> Using cache
---> 67f4cda61ff0
Step 4/4 : CMD [ "node", "/webserver/index.js" ]
---> Using cache
---> 1df8f9024b85
Successfully built 1df8f9024b85
$ curl localhost:4400
curl: (7) Failed to connect to localhost port 4400: Connection refused
$ docker run -d -p 4400:3000 1df8f9024b85
7d0e6c56f8ad8827fe72830a30c1aac96821104b8ea111291ca39e6536aad8fd
$ curl localhost:4400
Hello World!
$ | Docker's EXPOSE documentation addresses this specific point: The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published. To actually publish the port when running
the container, use the -p flag on docker run to publish and map
one or more ports, or the -P flag to publish all exposed ports and
map them to high-order ports. Pay attention to the last sentence: if you expose multiple ports, then -P becomes useful to avoid setting multiple -p flags on the command line. | {
"source": [
"https://devops.stackexchange.com/questions/4864",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/9488/"
]
} |
6,085 | How can I get the Jenkins console output in a text file?
I want to share it with someone, is there any way to do it? | If you just want to access the log, you can download it as a txt file to your workspace from the job's URL: ${BUILD_URL}/consoleText On Linux, you can use wget to download it to your workspace wget ${BUILD_URL}/consoleText The actual log file on the file system is on the master machine. You can find it under: $JENKINS_HOME/jobs/$JOB_NAME/builds/lastSuccessfulBuild/log | {
"source": [
"https://devops.stackexchange.com/questions/6085",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/11797/"
]
} |
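As an illustration of the console-output answer to question 6,085 above, if you would rather fetch the log programmatically than with wget, here is a short Python sketch against the same ${BUILD_URL}/consoleText endpoint; the URL, user name and API token are placeholders, and the auth argument can be dropped on an instance that allows anonymous read access.
# Sketch: save a build's console log to a local text file.
import requests

BUILD_URL = "https://jenkins.example.com/job/my-job/42/"   # placeholder build URL
AUTH = ("some-user", "some-api-token")                     # placeholder credentials

response = requests.get(BUILD_URL + "consoleText", auth=AUTH, timeout=30)
response.raise_for_status()
with open("console.txt", "w", encoding="utf-8") as handle:
    handle.write(response.text)
print("wrote", len(response.text), "characters to console.txt")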
12,092 | I'm trying to run rabbitMQ using docker-compose , but the service is always starting or unhealthy. rabbit is running fine, so I suspect there is something wrong with my health check. Running the healthcheck command locally does return a value. > curl -f http://localhost:5672
AMQP % But docker-compose ps always says the service is unhealthy (or starting, before it runs out of time). > docker-compose ps
docker-entrypoint.sh rabbi ... Up (unhealthy) 15671/tcp Here is what my docker-compose.yml file looks like. # docker-compose.yml
version: '2.3' # note: I can't change this version, must be 2.3
volumes:
rabbit-data:
services:
rabbit:
hostname: 'rabbit'
image: rabbitmq:3.8.5-management
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5672"]
interval: 30s
timeout: 30s
retries: 3
ports:
- '5672:5672'
- '15672:15672'
volumes:
- 'rabbit-data:/var/lib/rabbitmq/mnesia/'
networks:
- rabbitmq
networks:
rabbitmq:
driver: bridge I have also tried using nc instead of curl in the healthcheck, but got the same result. healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ] From https://github.com/docker-library/rabbitmq/issues/326 | You could use the command rabbitmq-diagnostics -q ping in case you just need a basic check. healthcheck:
test: rabbitmq-diagnostics -q ping
interval: 30s
timeout: 30s
retries: 3 More information on how to run more advanced health checks could be found here | {
"source": [
"https://devops.stackexchange.com/questions/12092",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/4553/"
]
} |
13 | In speech recognition, the front end generally does signal processing to allow feature extraction from the audio stream. A discrete Fourier transform (DFT) is applied twice in this process. The first time is after windowing; after this Mel binning is applied and then another Fourier transform. I've noticed however, that it is common in speech recognizers (the default front end in CMU Sphinx , for example) to use a discrete cosine transform (DCT) instead of a DFT for the second operation. What is the difference between these two operations? Why would you do DFT the first time and then a DCT the second time? | The Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) perform similar functions: they both decompose a finite-length discrete-time vector into a sum of scaled-and-shifted basis functions. The difference between the two is the type of basis function used by each transform; the DFT uses a set of harmonically-related complex exponential functions, while the DCT uses only (real-valued) cosine functions. The DFT is widely used for general spectral analysis applications that find their way into a range of fields. It is also used as a building block for techniques that take advantage of properties of signals' frequency-domain representation, such as the overlap-save and overlap-add fast convolution algorithms. The DCT is frequently used in lossy data compression applications, such as the JPEG image format. The property of the DCT that makes it quite suitable for compression is its high degree of "spectral compaction;" at a qualitative level, a signal's DCT representation tends to have more of its energy concentrated in a small number of coefficients when compared to other transforms like the DFT. This is desirable for a compression algorithm; if you can approximately represent the original (time- or spatial-domain) signal using a relatively small set of DCT coefficients, then you can reduce your data storage requirement by only storing the DCT outputs that contain significant amounts of energy. | {
"source": [
"https://dsp.stackexchange.com/questions/13",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/79/"
]
} |
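To make the energy-compaction point above concrete, here is a small sketch of my own (not part of the original answer), assuming numpy and scipy are available; the test signal and coefficient counts are arbitrary choices: import numpy as np
from scipy.fft import dct, fft
# a smooth, correlated test signal: a ramp plus a slow sinusoid
n = 64
t = np.linspace(0.0, 1.0, n)
x = t + 0.5 * np.sin(2 * np.pi * 1.5 * t)
# use orthonormal scalings so both transforms preserve energy identically
X_dct = dct(x, norm='ortho')
X_dft = fft(x) / np.sqrt(n)
def energy_fraction(coeffs, k):
    # fraction of total energy carried by the first k coefficients
    return np.sum(np.abs(coeffs[:k]) ** 2) / np.sum(np.abs(coeffs) ** 2)
for k in (4, 8):
    print(k, energy_fraction(X_dct, k), energy_fraction(X_dft, k)) For signals like this one, the DCT fractions should come out noticeably closer to 1 for small k, which is the compaction property the answer describes.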
65 | Given a recording I need to detect whether any clipping has occurred. Can I safely conclude there was clipping if any (one) sample reaches the maximum sample value, or should I look for a series of subsequent samples at the maximum level? The recording may be taken from 16- or 24-bit A/D converters, and is converted to floating-point values ranging from $-1...1$. If this conversion takes the form of a division by $2^{15}-1$ or $2^{23}-1$, then presumably the negative peaks can be somewhat lower than -1, and samples with the value -1 are not clipped? Obviously one can always create a signal specifically to defeat the clipping detection algorithm, but I'm looking at recordings of speech, music, sine waves or pink/white noise. | I was in the middle of typing an answer pretty much exactly like Yoda's . His is probably the most reliable, but I'll propose a different solution so you have some options. If you take a histogram of your signal, you will more than likely see a bell- or triangle-like shape, depending on the signal type. Clean signals will tend to follow this pattern. Many recording studios add a "loudness" effect that causes a little bump near the top, but it is still somewhat smooth looking. Here is an example from a real song from a major musician: Here is the histogram of the signal that Yoda gives in his answer: And now the case of there being clipping: This method can be fooled at times, but it is at least something to throw in your tool bag for situations where the FFT method doesn't seem to be working for you or requires too many computations for your environment. | {
"source": [
"https://dsp.stackexchange.com/questions/65",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/108/"
]
} |
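Below is a rough sketch of my own (not from the answer) of how the histogram idea plus a consecutive-sample check could be coded, assuming numpy; the 0.999 limit, the edge-spike ratio and the run length are illustrative guesses that would need tuning for real material: import numpy as np
def looks_clipped(x, bins=101, edge_ratio=10.0, run_length=3, limit=0.999):
    # x: float samples, nominally in the range [-1, 1]
    hist, _ = np.histogram(x, bins=bins, range=(-1.0, 1.0))
    interior = hist[1:-1].mean() + 1e-12
    # a spike in the outermost bins suggests samples piling up at full scale
    edge_spike = max(hist[0], hist[-1]) > edge_ratio * interior
    # also look for runs of consecutive samples stuck at the rails
    at_limit = np.abs(x) >= limit
    longest, current = 0, 0
    for flag in at_limit:
        current = current + 1 if flag else 0
        longest = max(longest, current)
    return edge_spike or longest >= run_length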
69 | Everyone discusses the Fourier transform when discussing signal processing. Why is it so important to signal processing and what does it tell us about the signal? Does it only apply to digital signal processing or does it apply to analog signals as well? | This is quite a broad question and it indeed is quite hard to pinpoint why exactly Fourier transforms are important in signal processing. The simplest, hand waving answer one can provide is that it is an extremely powerful mathematical tool that allows you to view your signals in a different domain, inside which several difficult problems become very simple to analyze. Its ubiquity in nearly every field of engineering and physical sciences, all for different reasons, makes it all the more harder to narrow down a reason. I hope that looking at some of its properties which led to its widespread adoption along with some practical examples and a dash of history might help one to understand its importance. History: To understand the importance of the Fourier transform, it is important to step back a little and appreciate the power of the Fourier series put forth by Joseph Fourier. In a nut-shell, any periodic function $g(x)$ integrable on the domain $\mathcal{D}=[-\pi,\pi]$ can be written as an infinite sum of sines and cosines as $$g(x)=\sum_{k=-\infty}^{\infty}\tau_k e^{\jmath k x}$$
$$\tau_k=\frac{1}{2\pi}\int_{\mathcal{D}}g(x)e^{-\jmath k x}\ dx$$ where $e^{\imath\theta}=\cos(\theta)+\jmath\sin(\theta)$. This idea that a function could be broken down into its constituent frequencies (i.e., into sines and cosines of all frequencies) was a powerful one and forms the backbone of the Fourier transform. The Fourier transform: The Fourier transform can be viewed as an extension of the above Fourier series to non-periodic functions. For completeness and for clarity, I'll define the Fourier transform here. If $x(t)$ is a continuous, integrable signal, then its Fourier transform, $X(f)$ is given by $$X(f)=\int_{\mathbb{R}}x(t)e^{-\jmath 2\pi f t}\ dt,\quad \forall f\in\mathbb{R}$$ and the inverse transform is given by $$x(t)=\int_{\mathbb{R}}X(f)e^{\jmath 2\pi f t}\ df,\quad \forall t\in\mathbb{R}$$ Importance in signal processing: First and foremost, a Fourier transform of a signal tells you what frequencies are present in your signal and in what proportions . Example: Have you ever noticed that each of your phone's number buttons
sounds different when you press it during a call, and that it sounds the same for every phone model? That's because they're each composed of two different sinusoids which can be used to uniquely identify the button. When you use your phone to punch in combinations to navigate a menu, the way that the other party knows what keys you pressed is by doing a Fourier transform of the input and looking at the frequencies present. Apart from some very useful elementary properties which make the mathematics involved simple, some of the other reasons why it has such widespread importance in signal processing are: The magnitude squared of the Fourier transform, $\vert X(f)\vert^2$, instantly tells us how much power the signal $x(t)$ has at a particular frequency $f$. From Parseval's theorem (more generally Plancherel's theorem), we have
$$\int_\mathbb{R}\vert x(t)\vert^2\ dt = \int_\mathbb{R}\vert X(f)\vert^2\ df$$
which means that the total energy in a signal across all time is equal to the total energy in the transform across all frequencies . Thus, the transform is energy preserving. Convolutions in the time domain are equivalent to multiplications in the frequency domain, i.e., given two signals $x(t)$ and $y(t)$, then if $$z(t)=x(t)\star y(t)$$
where $\star$ denotes convolution, then the Fourier transform of $z(t)$ is merely $$Z(f)=X(f)\cdot Y(f)$$ For discrete signals, with the development of efficient FFT algorithms, almost always, it is faster to implement a convolution operation in the frequency domain than in the time domain. Similar to the convolution operation, cross-correlations are also easily implemented in the frequency domain as $Z(f)=X(f)^*Y(f)$, where $^*$ denotes complex conjugate. By being able to split signals into their constituent frequencies, one can easily block out certain frequencies selectively by nullifying their contributions. Example: If you're a football (soccer) fan, you might've been
annoyed at the constant drone of the vuvuzelas that pretty much
drowned all the commentary during the 2010 world cup in South Africa.
However, the vuvuzela has a constant pitch of ~235Hz which made it
easy for broadcasters to implement a notch filter to cut-off the
offending noise. [1] A shifted (delayed) signal in the time domain manifests as a phase change in the frequency domain. While this falls under the elementary property category, it is a widely used property in practice, especially in imaging and tomography applications. Example: When a wave travels through a heterogeneous medium, it
slows down and speeds up according to changes in the speed of wave
propagation in the medium. So by observing the difference in phase between
what's expected and what's measured, one can infer the excess time
delay which in turn tells you how much the wave speed has changed in
the medium. This is, of course, a very simplified layman's explanation, but
forms the basis for tomography. Derivatives of signals (n th derivatives too) can be easily calculated (see 106) using Fourier transforms. Digital signal processing (DSP) vs. Analog signal processing (ASP) The theory of Fourier transforms is applicable irrespective of whether the signal is continuous or discrete, as long as it is "nice" and absolutely integrable. So yes, ASP uses Fourier transforms as long as the signals satisfy this criterion. However, it is perhaps more common to talk about Laplace transforms, which is a generalized Fourier transform, in ASP. The Laplace transform is defined as $$X(s)=\int_{0}^{\infty}x(t)e^{-st}\ dt,\quad \forall s\in\mathbb{C}$$ The advantage is that one is not necessarily confined to "nice signals" as in the Fourier transform, but the transform is valid only within a certain region of convergence. It is widely used in studying/analyzing/designing LC/RC/LCR circuits, which in turn are used in radios/electric guitars, wah-wah pedals, etc. This is pretty much all I could think of right now, but do note that no amount of writing/explanation can fully capture the true importance of Fourier transforms in signal processing and in science/engineering | {
"source": [
"https://dsp.stackexchange.com/questions/69",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/20/"
]
} |
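As a quick numerical sanity check of two of the properties listed in the answer above (Parseval's theorem and the convolution theorem), here is a short numpy snippet of my own; the signal lengths and random seed are arbitrary: import numpy as np
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = rng.standard_normal(256)
# Parseval: with numpy's FFT scaling, sum |X[k]|^2 = N * sum |x[n]|^2
X = np.fft.fft(x)
print(np.allclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / len(x)))
# Convolution theorem: circular convolution in time <-> multiplication in frequency
z_freq = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
z_time = np.array([np.sum(x * np.roll(y[::-1], k + 1)) for k in range(len(x))])
print(np.allclose(z_freq, z_time)) Both checks should print True.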
74 | I've learned about a number of edge detection algorithms, including algorithms like Sobel, Laplacian, and Canny methods. It seems to me the most popular edge detector is a Canny edge detector, but are there cases where this isn't the optimal algorithm to use? How can I decide which algorithm to use? Thanks! | There are lots of edge detection possibilities, but the 3 examples you mention happen to fall in 3 distinct categories. Sobel This approximates a first order derivative. Gives extrema at the gradient positions, 0 where no gradient is present. In 1D, it is = $\left[ \begin{array}{ccc} -1 & 0 & 1 \end{array} \right]$ smooth edge => local minimum or maximum, depending on the signal going up or down. 1 pixel line => 0 at the line itself, with local extrema (of different sign) right next to it. There are other alternatives to Sobel, which have +/- the same characteristics. On the Roberts Cross page on wikipedia you can find a comparison of a few of them. Laplace This approximates a second order derivative. In 1D, it is = $\left[ \begin{array}{ccc} 1 & -2 & 1 \end{array} \right]$ Gives 0 at the gradient positions and also 0 where no gradient is present. It gives extrema where a (longer) gradient starts or stops. smooth edge => 0 along the edge, local extrema at the start/stop of the edge. 1 pixel line => a "double" extremum at the line, with "normal" extrema of a different sign right next to it. The effect of these 2 on different types of edges can be best viewed visually: Canny This is not a simple operator, but is a multi-step approach, which uses Sobel as one of the steps. Where Sobel and Laplace give you a grayscale / floating point result, which you need to threshold yourself, the Canny algorithm has smart thresholding as one of its steps, so you just get a binary yes/no result. Also, on a smooth edge, you will likely find just 1 line somewhere in the middle of the gradient. | {
"source": [
"https://dsp.stackexchange.com/questions/74",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/88/"
]
} |
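To experiment with the three operators described above, a sketch along these lines can be used (my own code, assuming scipy and scikit-image are installed; the test image and the sigma value are arbitrary choices): import numpy as np
from scipy import ndimage
from skimage import data, feature
img = data.camera().astype(float) / 255.0
# Sobel: first-derivative approximation, one pass per axis, combined as a gradient magnitude
sx = ndimage.sobel(img, axis=0)
sy = ndimage.sobel(img, axis=1)
sobel_mag = np.hypot(sx, sy)
# Laplace: second-derivative approximation; edges sit at the zero crossings
laplacian = ndimage.laplace(img)
# Canny: multi-step detector with built-in smoothing and hysteresis thresholding,
# so it returns a binary edge map directly
edges = feature.canny(img, sigma=2.0)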
134 | Suppose I've got an audio signal sampled at $48000$ Hz, and I'd like to design a low-pass filter that isolates everything below ~$60$ Hz. In the digital world, this is a low-pass filter with the passband at $[-\frac{\pi}{400} , \frac{\pi}{400} ]$. Also, the transition band should be reasonable as well. Building an FIR filter for this can require a lot of taps, which in the long run affects precision. An IIR filter is not ideal either, because audio suffers from the non-linear-phase response of such filters, so unless the signal is filtered, then reversed and filtered again, it is not really an option. Could a wavelet transform be better at this than in-one-go regular filtering? | If you are optimizing for engineering time and are on a platform that supports large FFTs well (i.e. not fixed point), then take hotpaw2's advice and use fast convolution . It will perform much better than a naive FIR implementation and should be relatively easy to implement. On the other hand, if you have some time to spend on this to get the best implementation or are on a fixed-point platform, you should use a multirate down-filter-up-subtract structure. But it's a bit trickier to get everything right. I've got access to trusted and highly optimized implementations of both fast convolution and multirate filtering tools. The fast convolution takes about 3x longer to get equivalent signal performance compared to the multirate structure. Furthermore, that is even on a floating-point platform. The gap would widen considerably on a fixed-point DSP. In general terms: Down-conversion: Use 8 stages of halfband, decimate-by-2 filters to transform your 48 kHz signal into a 187.5 Hz signal. The first stage of this downsampling can have a very wide transition band, allowing energy to alias as long as it doesn't alias back into the sub-60 Hz range. As the stages progress, the number of taps needs to increase, but they will be applied at progressively lower sampling rates, so the overall cost of each stage remains small. Filtering: Perform your tight filtering around the 60 Hz bandwidth to keep the energy you will eventually want to subtract. There is a double advantage to doing your tight filtering at the low rate: 1 Hz of transition bandwidth is 256 times larger in terms of digital frequency at the low rate vs. the original rate. So every tap of your filter is 256 times as powerful. The signal itself is at a lower rate, so the filter only needs to process 1/256 of the data. Up-conversion: Essentially, this is the reverse of the decimation stages. Each of the 8 interpolator stages doubles the rate by estimating the sample that goes between consecutive input samples. The transition band gets wider as the sample rate gets higher. Subtract: Subtract your full-rate low-pass filtered signal from the original signal. If you've adjusted for all the group delays properly, the overall structure will be a highpass filter with a narrow transition bandwidth. | {
"source": [
"https://dsp.stackexchange.com/questions/134",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/6/"
]
} |
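For the simpler fast-convolution route mentioned at the start of the answer, a sketch could look like the following (my own example, assuming scipy; the tap count, transition width and test signal are illustrative, and edge transients and exact delay compensation are glossed over): import numpy as np
from scipy import signal
fs = 48000
# long linear-phase FIR low-pass around 60 Hz; a narrow transition band at this
# rate needs many taps, which is exactly why the multirate structure pays off
taps = signal.firwin(8001, cutoff=60.0, width=30.0, fs=fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(2 * fs)            # two seconds of test noise
# FFT-based (fast) convolution instead of direct FIR filtering
low = signal.fftconvolve(x, taps, mode='same')
high = x - low                             # complementary output, as in the subtract step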
206 | I'm studying some DSP and I'm having trouble understanding the difference between phase delay and group delay . It seems to me that they both measure the delay time of sinusoids passed through a filter. Am I correct in thinking this? If so, how do the two measurements differ? Could someone give an example of a situation in which one measurement would be more useful than the other? UPDATE Reading ahead in Julius Smith's Introduction to Digital Filters , I've found a situation where the two measurements at least give different results: affine-phase filters . That's a partial answer to my question, I guess. | First of all the definitions are different: Phase delay: (the negative of) Phase divided by frequency Group delay: (the negative of) First derivative of phase vs frequency In words that means: Phase delay: Phase angle at this point in frequency Group delay: Rate of change of the phase around this point in frequency. When to use one or the other really depends on your application. The classical application for group delay is modulated sine waves, for example AM radio. The time that it takes for the modulation signal to get through the system is given by the group delay not by the phase delay. Another audio example could be a kick drum: This is mostly a modulated sine wave so if you want to determine how much the kick drum will be delayed (and potentially smeared out in time) the group delay is the way to look at it. | {
"source": [
"https://dsp.stackexchange.com/questions/206",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/255/"
]
} |
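To make the two definitions concrete, here is a brief scipy sketch of my own (the example Butterworth filter is arbitrary) that computes both quantities for the same filter: import numpy as np
from scipy import signal
# an arbitrary example filter: 5th-order Butterworth low-pass
b, a = signal.butter(5, 0.2)
# phase delay: -phase(w) / w
w, h = signal.freqz(b, a, worN=512)
phase = np.unwrap(np.angle(h))
phase_delay = -phase[1:] / w[1:]           # skip w = 0 to avoid dividing by zero
# group delay: -d(phase)/dw, provided directly by scipy
w_gd, gd = signal.group_delay((b, a), w=512)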
208 | If one wants to smooth a time series using a window function such as Hanning, Hamming, Blackman etc., what are the considerations for favouring any one window over another? | The two primary factors that describe a window function are: Width of the main lobe (i.e., at what frequency bin is the power half that of the maximum response) Attenuation of the side lobes (i.e., how far away down are the side lobes from the mainlobe). This tells you about the spectral leakage in the window. Another not so frequently considered factor is the rate of attenuation of the sidelobes, i.e., how fast do the sidelobes die down. Here's a quick comparison for four well known window functions: Rectangular, Blackman, Blackman-Harris and Hamming. The curves below are 2048-point FFTs of 64-point windows. You can see that the rectangular function has a very narrow main lobe, but the side lobes are quite high, at ~13 dB. Other filters have significantly fatter main lobes, but fare much better in the side lobe suppression. In the end, it's all a trade-off. You can't have both, you have to pick one. So that said, your choice of window function is highly dependent on your specific needs. For instance, if you're trying to separate/identify two signals that are fairly close in frequency, but similar in strength, then you should choose the rectangular, because it will give you the best resolution. On the other hand, if you're trying to do the same with two different strength signals with differing frequencies, you can easily see how energy from one can leak in through the high sidelobes. In this case, you wouldn't mind one of the fatter main lobes and would trade a slight loss in resolution to be able to estimate their powers more accurately. In seismic and geophysics, it is common to use Slepian windows (or discrete prolate spheroidal wavefunctions, which are the eigenfunctions of a sinc kernel) to maximize the energy concentrated in the main lobe. | {
"source": [
"https://dsp.stackexchange.com/questions/208",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/260/"
]
} |
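A comparison like the one in the answer can be reproduced with a few lines (my own sketch, assuming numpy and scipy; the window names, length and FFT size simply mirror the figures described above): import numpy as np
from scipy.signal import get_window
n, nfft = 64, 2048
for name in ('boxcar', 'hamming', 'blackman', 'blackmanharris'):
    w = get_window(name, n)
    mag = np.abs(np.fft.rfft(w, nfft))
    mag_db = 20 * np.log10(mag / mag.max() + 1e-12)
    # end of the main lobe = first local minimum; peak sidelobe = largest value after it
    mins = np.where((mag_db[1:-1] < mag_db[:-2]) & (mag_db[1:-1] < mag_db[2:]))[0]
    main_lobe_end = mins[0] + 1
    print(name, round(mag_db[main_lobe_end:].max(), 1)) The printed sidelobe levels should land near the familiar values (about -13 dB for the rectangular window, much lower for the others), matching the trade-off discussed above.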
374 | Over on the TeX stackexchange, we have been discussing how to detect "rivers" in paragraphs in this question . In this context, rivers are bands of white space that result from accidental alignment of interword spaces in the text. Since this can be quite distracting to a reader bad rivers are considered to be a symptom of poor typography. An example of text with rivers is this one, where there are two rivers flowing diagonally. There is interest in detecting these rivers automatically, so that they can be avoided (probably by manual editing of the text). Raphink is making some progress at the TeX level (which only knows of glyph positions and bounding boxes), but I feel confident that the best way to detect rivers is with some image processing (since glyph shapes are very important and not available to TeX). I have tried various ways to extract the rivers from the above image, but my simple idea of applying a small amount of ellipsoidal blurring doesn't seem to be good enough. I also tried some Radon Hough transform based filtering, but I didn't get anywhere with those either. The rivers are very visible to the feature-detection circuits of the human eye/retina/brain and somehow I would think this could be translated to some kind of filtering operation, but I am not able to make it work. Any ideas? To be specific, I'm looking for some operation that will detect the 2 rivers in the above image, but not have too many other false positive detections. EDIT: endolith asked why I am pursuing a image-processing-based approach given that in TeX we have access to the glyph positions, spacings, etc, and it might be much faster and more reliable to use an algorithm that examines the actual text. My reason for doing things the other way is that the shape of the glyphs can affect how noticeable a river is, and at the text level it is very difficult to consider this shape (which depends on the font, on ligaturing, etc). For an example of how the shape of the glyphs can be important, consider the following two examples, where the difference between them is that I have replaced a few glyphs with others of almost the same width, so that a text-based analysis would consider them equally good/bad. Note, however, that the rivers in the first example are much worse than in the second. | I have thought about this some more, and think that the following should be fairly stable. Note that I have limited myself to morphological operations, because these should be available in any standard image processing library. (1) Open image with a nPix-by-1 mask, where nPix is about the vertical distance between letters #% read image
img = rgb2gray(imread('http://i.stack.imgur.com/4ShOW.png'));
%# threshold and open with a rectangle
%# that is roughly letter sized
bwImg = img > 200; %# threshold of 200 is better than 128
opImg = imopen(bwImg,ones(13,1)); (2) Open image with a 1-by-mPix mask to eliminate whatever is too narrow to be a river. opImg = imopen(opImg,ones(1,5)); (3) Remove horizontal "rivers and lakes" that are due to space between paragraphs, or indentation. For this, we remove all rows that are all true, and open with the nPix-by-1 mask that we know will not affect the rivers we have found previously. To remove lakes, we can use an opening mask that is slightly larger than nPix-by-nPix. At this step, we can also throw out everything that is too small to be a real river, i.e. everything that covers less area than (nPix+2)*(mPix+2)*4 (that will give us ~3 lines). The +2 is there because we know that all objects are at least nPix in height, and mPix in width, and we want to go a little above that. %# horizontal river: just look for rows that are all true
opImg(all(opImg,2),:) = false;
%# open with line spacing (nPix)
opImg = imopen(opImg,ones(13,1));
%# remove lakes with nPix+2
opImg = opImg & ~imopen(opImg,ones(15,15));
%# remove small fry
opImg = bwareaopen(opImg,7*15*4); (4) If we're interested in not only the length, but also the width of the river, we can combine distance transform with skeleton. dt = bwdist(~opImg);
sk = bwmorph(opImg,'skel',inf);
%# prune the skeleton a bit to remove branches
sk = bwmorph(sk,'spur',7);
riversWithWidth = dt.*sk; (colors correspond to width of river (though color bar is off by a factor of 2) Now you can get the approximate length of the rivers by counting the number of pixels in each connected component, and the average width by averaging their pixel values. Here's the exact same analysis applied to the second, "no-river" image: | {
"source": [
"https://dsp.stackexchange.com/questions/374",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/371/"
]
} |
386 | I'm reading up on Autocorrelation , but I'm not sure I understand exactly how it works and what output I should expect. Am I right in thinking that I should input my signal to the AC function and have a sliding window input. Each window (of 1024 samples, for example) would output a coefficient between -1 and 1. The sign simply states if the line is upwards or downwards and the value states how strong the correlation is. For simplicity, lets say I don't have an overlap and just move the window 1024 samples each time. In a sample of 44100, would I get 43 coefficients and do I need to keep all of them? Lets say I perform this for a 200 second signal, giving me 8600 coefficients. How would I use these coefficients to detect repetition and, in turn, tempo? Should I create some sort of neural network to group them, or is that overkill? Thanks for any help. | The idea of autocorrelation is to provide a measure of similarity between a signal and itself at a given lag. There are several ways to approach it, but for the purposes of pitch/tempo detection, you can think of it as a search procedure. In other words, you step through the signal sample-by-sample and perform a correlation between your reference window and the lagged window. The correlation at "lag 0" will be the global maximum because you're comparing the reference to a verbatim copy of itself. As you step forward, the correlation will necessarily decrease, but in the case of a periodic signal, at some point it will begin to increase again, then reach a local maximum. The distance between "lag 0" and that first peak gives you an estimate of your pitch/tempo. The way I'm going to describe how to compute it in practice yields something slightly different (i.e. cyclic vs. acyclic mostly), but that's the conceptual basis for how it works. Computing sample-by-sample correlations can be very computationally expensive at high sample rates, so typically an FFT-based approach is used. Taking the FFT of the segment of interest, multiplying it by its complex conjugate , then taking the inverse FFT will give you the cyclic autocorrelation . In code (using numpy ): freqs = numpy.fft.rfft(signal)
autocorr = numpy.fft.irfft(freqs * numpy.conj(freqs)) The effect will be to decrease the amount of noise in the signal (which is uncorrelated with itself) relative to the periodic components (which are similar to themselves by definition). Repeating the autocorrelation (i.e. conjugate multiplication) before taking the inverse transform will reduce the noise even more. Consider the example of a sine wave mixed with white noise. The following plot shows a 440 Hz sine wave, the same sine wave "corrupted" by noise, the cyclic autocorrelation of the noisy wave, and the double cyclic autocorrelation: Note how the first peak of both autocorrelation signals is located exactly at the end of the first cycle of the original signal. That's the peak you're looking for in order to determine the periodicity (pitch in this case). The first autocorrelation signal is still a little "wiggly", so in order to do peak detection, some kind of smoothing would be required. Autocorrelating twice in the frequency domain accomplishes the same thing (and is relatively fast). Note that by "wiggly", I mean how the signal looks when zoomed way in, not the dip that occurs in the center of the plot. The second half of the cyclic autocorrelation will always be the mirror image of the first half, so that kind of "dip" is typical. Just to be clear about the algorithm, here's what the code would look like: freqs = numpy.fft.rfft(signal)
auto1 = freqs * numpy.conj(freqs)
auto2 = auto1 * numpy.conj(auto1)
result = numpy.fft.irfft(auto2) Whether you would need to do more than one autocorrelation depends on how much noise is in the signal. Of course, there are many subtle variations on this idea, and I'm not going to get into all of them here. The most comprehensive coverage I've seen (in the context of pitch detection) is in Digital Processing of Speech Signals by Rabiner and Schafer. Now, as to whether autocorrelation will be sufficient for tempo detection. The answer is yes and no. You can get some tempo information (depending on the source signal), but it may be hard to make sense of what it means in all cases. For example, here's a plot of two loops of a breakbeat, followed by a plot of the cyclic autocorrelation of the entire sequence: For reference, here's the corresponding audio: http://soundcloud.com/datageist/breakbeat-autocorrelation Sure enough, there's a nice spike right in the middle corresponding to the loop point, but it came from processing quite a long segment. On top of that, if it wasn't an exact copy (e.g. if there were instrumentation with it), that spike wouldn't be as clean. Autocorrelation will definitely be useful in tempo detection, but it probably won't be sufficient by itself for complex source material. For example, even if you find a spike, how do you know whether it's a full measure, or quarter note, a half note, or something else? In this case it's clear enough that it's a full measure, but that won't always be the case. I'd suggest playing around with using AC on simpler signals until the inner workings become clear, then asking another question about tempo detection in general (as it's a "bigger" subject). | {
"source": [
"https://dsp.stackexchange.com/questions/386",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/334/"
]
} |
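Continuing from the code in the answer above, one possible (heuristic) way to turn the autocorrelation into a period estimate is to skip the lag-0 peak and pick the strongest peak after the first zero crossing; this is my own sketch, assuming numpy: import numpy as np
def estimate_period(x, fs):
    # cyclic autocorrelation via the FFT, as in the answer above
    freqs = np.fft.rfft(x)
    ac = np.fft.irfft(freqs * np.conj(freqs))
    ac = ac[:len(ac) // 2]                 # the second half mirrors the first
    # step past the lag-0 peak (first zero crossing), then take the largest remaining peak
    start = np.argmax(ac < 0) or 1
    lag = start + np.argmax(ac[start:])
    return lag / fs                        # period in seconds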
411 | I'm working on a simple web app that allows the user to tune his/her guitar. I'm a real beginner in signal processing, so please don't judge me too harshly if my question is inappropriate. So, I managed to get the fundamental frequency using an FFT algorithm and at this point the application is somehow functional. However, there is room for improvement. Right now, I send raw PCM to the FFT algorithm, but I was thinking that maybe there are some pre/post algorithms/filters that may improve the detection. Can you suggest any? My main problem is that when it detects a certain frequency, it shows that frequency for 1-2sec and then jumps to other random frequencies and comes back again and so on, even if the sound is continuous. I'm also interested in any other type of optimization if anyone has experience with such things. | I'm guessing the other frequencies it gets are harmonics of the fundamental? Like you're playing 100 Hz and it picks out 200 Hz or 300 Hz instead? First, you should limit your search space to the frequencies that a guitar is likely to be. Find the highest fundamental you're likely to need and limit to that. Autocorrelation will work better than FFT at finding the fundamental, if the fundamental is lower in amplitude than the harmonics (or missing altogether, but that's not an issue with guitar): You can also try weighting the lower frequencies to emphasize the fundamental and minimize harmonics, or use a peak-picking algorithm like this and then just choose the lowest in frequency. Also, you should be windowing your signal before applying the FFT. You just multiply it by a window function , which tapers off the beginning and end of the waveform to make the frequency spectrum cleaner. Then you get tall narrow spikes for frequency components instead of broad ones. You can also use interpolation to get a more accurate peak. Take the log of the spectrum, then fit a parabola to the peak and the two neighboring points, and find the parabola's true peak. You might not need this much accuracy, though. Here is my example Python code for all of this . | {
"source": [
"https://dsp.stackexchange.com/questions/411",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/33/"
]
} |
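The parabolic-interpolation step mentioned at the end of the answer takes only a few lines; this is my own sketch of the standard three-point formula (not the linked example code), and the names frame and fs in the commented usage are placeholders: import numpy as np
def refine_peak(mag, k):
    # mag: log-magnitude spectrum, k: index of the detected peak bin
    a, b, c = mag[k - 1], mag[k], mag[k + 1]
    offset = 0.5 * (a - c) / (a - 2 * b + c)   # the true peak lies at k + offset bins
    return k + offset
# usage sketch: window the frame, FFT it, pick a peak bin, then refine it
# frame = frame * np.hanning(len(frame))
# spectrum = np.abs(np.fft.rfft(frame))
# k = int(np.argmax(spectrum))
# freq_hz = refine_peak(np.log(spectrum + 1e-12), k) * fs / len(frame)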
424 | I've heard that the Hilbert transform can be used to calculate the envelope of a signal. How does this work? And how is this "Hilbert envelope" different from the envelope one gets by simply rectifying a signal? I'm interested specifically in finding a way to calculate an envelope for use in dynamic range compression (i.e., "turning down the volume" of the loud parts of an audio signal automatically). | The Hilbert transform is used to calculate the "analytic" signal. See for example http://en.wikipedia.org/wiki/Analytic_signal . If your signal is a sine wave or a modulated sine wave, the magnitude of the analytic signal will indeed look like the envelope. However, the computation of the Hilbert transform is not trivial. Technically, it requires a non-causal FIR filter of considerable length, so it will require a fair amount of MIPS, memory and latency. For a broadband signal, it really depends on how you define "envelope" for your specific application. For your application of dynamic range compression you want a metric that is well correlated with the perception of loudness over time. The Hilbert transform is not the right tool for that. A better option would be to apply an A-weighted filter ( http://en.wikipedia.org/wiki/A-weighting ) and then do a lossy peak or lossy RMS detector. This will correlate fairly well with perceived loudness over time and is relatively cheap to do. | {
"source": [
"https://dsp.stackexchange.com/questions/424",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/255/"
]
} |
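For comparison, here is a short sketch of my own (assuming scipy) showing both the Hilbert-transform envelope and a simple lossy peak detector of the kind the answer recommends; the attack/release constants are illustrative guesses and the A-weighting stage is omitted: import numpy as np
from scipy.signal import hilbert
def hilbert_envelope(x):
    # magnitude of the analytic signal
    return np.abs(hilbert(x))
def lossy_peak_detector(x, fs, attack_ms=5.0, release_ms=50.0):
    # one-pole attack/release follower on the rectified signal
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = atk if v > level else rel
        level = coeff * level + (1.0 - coeff) * v
        env[i] = level
    return env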
427 | What are some recommended resources (books, tutorials, lectures, etc.) on digital signal processing, and how to begin working with it on a technical level? | My recommendation in terms of text books is Rick Lyons's Understanding DSP . My review of the latest edition is here . I, and many others from the ${\tt comp.dsp}$ community and elsewhere, have helped Rick revise parts of the text since the first edition. For self-study, I know of no better book. As an on-line, free resource, I recommend Steve Smith's book . Personally, I prefer Rick's style, but Steve's book as the advantage of online accessibility (and the online version is free!). Edit: Rick sent me some feedback that I thought I'd share here: For your colleagues that have a copy of my DSP book,
I'll be happy to send them the errata for my book. All they have
to do is send me an E-mail telling me (1) The Edition Number,
and (2) the Printing Number of their copy of the book. The Printing
Number can be found on the page just before the 'Dedication' page.
My E-mail address is:
R.Lyons [at] ieee.org I recommend that your colleagues have a look at: http://www.redcedar.com/learndsp.htm Rick also gave me a long list of online DSP references. There are way too many to put here. I will see about setting up a GoogleDocs version and re-post here later. | {
"source": [
"https://dsp.stackexchange.com/questions/427",
"https://dsp.stackexchange.com",
"https://dsp.stackexchange.com/users/439/"
]
} |