http://acs.polsl.pl/index.php?mode=2&show=71
ACS Abstracts:

2017 (Volume 27), Number 1

1. Rational taxation in an open access fishery model
2. State estimation in a decentralized discrete time LQG control for a multisensor system
3. Robust H∞ output feedback control of bidirectional inductive power transfer systems
4. An adaptive control scheme for a hyperbolic partial differential equation system (drilling system) with unknown coefficient
5. Pointwise observation of the state given by a complex time lag parabolic system
6. New perspectives of analog and digital simulations of fractional order systems
7. Eigenvalue assignment in fractional descriptor discrete-time linear systems

Rational taxation in an open access fishery model

D.B. Rokhlin, A. Usov (Southern Federal University, Russia)

We consider a model of fishery management, where n agents exploit a single population with a strictly concave, continuously differentiable growth function of Verhulst type. If the agents' actions are coordinated and directed towards the maximization of the discounted cooperative revenue, then the biomass stabilizes at the level defined by the well-known 'golden rule'. We show that for independent myopic harvesting agents such optimal (or ε-optimal) cooperative behavior can be stimulated by a proportional tax, depending on the resource stock and equal to the marginal value function of the cooperative problem. To implement this taxation scheme we prove that the mentioned value function is strictly concave and continuously differentiable, although the instantaneous individual revenues may be neither concave nor differentiable.

keywords: marginal value function, stimulating taxes, myopic agents, optimal control

State estimation in a decentralized discrete time LQG control for a multisensor system

Z. Duda (Silesian University of Technology, Poland)

In this paper, state filtration in a decentralized discrete time Linear Quadratic Gaussian (LQG) problem formulated for a multisensor system is considered. Local optimal control laws depend on global state estimates and are calculated by each node. In the classical centralized information pattern, the global state estimators use measurement data from all nodes. In a decentralized system, the global state estimates are computed at each node using local state estimates based on local measurements and the values of previous controls from other nodes.

In this paper, by contrast, the controls are not transmitted between nodes. This leads to a nonconventional filtration, because the controls from other nodes must be treated as random variables by each node. The price of the reduced transmission is increased filter computation at each node.

keywords: multisensor system, LQG problem, Kalman filter

Robust H∞ output feedback control of bidirectional inductive power transfer systems

A. Swain, M.J. Neath (The University of Auckland, New Zealand), D. Almakhles (Prince Sultan University, Saudi Arabia)

Bidirectional inductive power transfer (IPT) systems behave as high-order resonant networks and hence are highly sensitive to changes in system parameters. Traditional PID controllers often fail to maintain satisfactory power regulation in the presence of parametric uncertainties. To overcome these problems, this paper proposes a robust controller designed using linear matrix inequality (LMI) techniques. The output sensitivity to parametric uncertainty is explored, and a linear fractional transformation of the nominal model and its uncertainty is discussed to generate a standard configuration for µ-synthesis and LMI analysis. An H∞ controller is designed based on the structured singular value and LMI feasibility analysis with regard to uncertainties in the primary tuning capacitance, the primary and pickup inductors, and the mutual inductance. Robust stability and robust performance of the system are studied through µ-synthesis and LMI feasibility analysis. Simulations and experiments are conducted to verify the power regulation performance of the proposed controller.

keywords: inductive power transfer, wireless power transfer, robust control, linear matrix inequalities, sensitivity analysis

An adaptive control scheme for a hyperbolic partial differential equation system (drilling system) with unknown coefficient

H.S. Farahani, H.A. Talebi, M. Bagher Menhaj (Amirkabir University of Technology, Iran)

Adaptive boundary stabilization is investigated for a class of systems described by second-order hyperbolic PDEs with an unknown coefficient. The proposed control scheme utilizes only measurements at the top boundary and assumes anti-damping dynamics at the opposite boundary, which is the main feature of our work. To cope with the lack of full state measurements, we introduce Riemann variables which allow us to reformulate the second-order-in-time hyperbolic PDE as a system with linear input-delay dynamics. Then, infinite-dimensional time-delay tools are employed to design the controller. Simulation results, applied to a mathematical model of a drilling system, are given to demonstrate the effectiveness of the proposed control approach.

Pointwise observation of the state given by a complex time lag parabolic system

A. Kowalewski (AGH University of Science and Technology, Poland)

Various optimization problems for linear parabolic systems with multiple constant time lags are considered. In this paper, we consider an optimal distributed control problem for a linear complex parabolic system in which different multiple constant time lags appear both in the state equation and in the Neumann boundary condition. Sufficient conditions for the existence of a unique solution of the parabolic time lag equation with the Neumann boundary condition are proved. The time horizon T is fixed. Making use of the Lions scheme, necessary and sufficient conditions of optimality for the Neumann problem with a quadratic performance functional, pointwise observation of the state, and constrained control are derived. An example of application is also provided.

keywords: distributed control, parabolic system, time lags, pointwise observation

New perspectives of analog and digital simulations of fractional order systems

A. Charef (Université des Frères Mentouri - Constantine, Algeria)

In recent decades, fractional order systems have been found to be useful in many areas of physics and engineering. Hence, their efficient and accurate analog and digital simulation and numerical calculation have become very important, especially in the fields of fractional control, fractional signal processing, and fractional system identification. In this article, new perspectives on the analog and digital simulation and numerical calculation of fractional systems are considered. The main feature of this work is the introduction of an adjustable fractional order structure of the fractional integrator, to facilitate and improve the simulation of fractional order systems as well as the numerical resolution of linear fractional order differential equations. First, the basic ideas of the proposed adjustable fractional order structure of the fractional integrator are presented. Then, the analog and digital simulation techniques for fractional order systems and the numerical resolution of the linear fractional order differential equation are presented. Illustrative examples of each step of this work are given to show the effectiveness and efficiency of the proposed analog and digital simulation and implementation techniques for fractional order systems.

keywords: adjustable fractional operators, Charef approximation, fractional differential equation, fractional integrator, fractional systems

Eigenvalue assignment in fractional descriptor discrete-time linear systems

T. Kaczorek, K. Borawski (Bialystok University of Technology, Poland)

The problem of eigenvalue assignment in fractional descriptor discrete-time linear systems is considered. Necessary and sufficient conditions for the existence of a solution to the problem are established. A procedure for the computation of the gain matrices is given and illustrated by a numerical example.

keywords: eigenvalue assignment, fractional, descriptor, discrete-time linear system, gain matrix
https://mc.ai/pytorch-tabular-multiclass-classification/
# PyTorch [Tabular] — Multiclass Classification

Original article can be found here (source): Deep Learning on Medium

## This blog post takes you through an implementation of multi-class classification on tabular data using PyTorch.

We will use the wine dataset available on Kaggle. This dataset has 12 columns, where the first 11 are the features and the last column is the target. The data set has 1599 rows.

# Import Libraries

We're using `tqdm` to enable progress bars for the training and testing loops.

```python
import numpy as np
import pandas as pd
import seaborn as sns
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader, WeightedRandomSampler

from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
```

```python
df = pd.read_csv("data/tabular/classification/winequality-red.csv")
df.head()
```

# EDA and Preprocessing

To make the data fit for a neural net, we need to make a few adjustments to it.

## Class Distribution

First off, we plot the target column to observe the class distribution. There's a lot of imbalance here; classes 3, 4, and 8 have very few samples.

```python
sns.countplot(x='quality', data=df)
```

## Encode Output Class

Next, we see that the output labels run from 3 to 8. That needs to change, because PyTorch expects class labels starting from 0, that is, in [0, n-1]. We need to remap our labels to start from 0.

To do that, let's create a dictionary called `class2idx` and use the `.replace()` method from the Pandas library to change the labels. Let's also create a reverse mapping called `idx2class` which converts the IDs back to their original classes. To create the reverse mapping, we use a dictionary comprehension that simply swaps each key and value.

```python
class2idx = {
    3: 0,
    4: 1,
    5: 2,
    6: 3,
    7: 4,
    8: 5
}

idx2class = {v: k for k, v in class2idx.items()}

df['quality'].replace(class2idx, inplace=True)
```

## Create Input and Output Data

In order to split our data into train, validation, and test sets using `train_test_split` from Sklearn, we need to separate out our inputs and outputs. Input `X` is all but the last column; output `y` is the last column.

```python
X = df.iloc[:, 0:-1]
y = df.iloc[:, -1]
```

## Train — Validation — Test

To create the train-val-test split, we'll use `train_test_split()` from Sklearn. First we'll split our data into train+val and test sets. Then, we'll further split our train+val set to create the train and val sets.

Because there's a class imbalance, we want an equal distribution of all output classes in our train, validation, and test sets. To achieve that, we use the `stratify` option of `train_test_split()`.

```python
# Split into train+val and test
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=69)

# Split train+val into train and val
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.1, stratify=y_trainval, random_state=21)
```

## Normalize Input

Neural networks train much more reliably when the input features lie in a small, comparable range such as (0, 1). There's a ton of material available online on why this is the case. To scale our values, we'll use the `MinMaxScaler()` from Sklearn.
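The underlying arithmetic is simple enough to sanity-check by hand. Here is a quick standalone sketch of the min-max formula applied to one feature; the numbers below are invented for illustration and are not from the wine dataset:

```python
import numpy as np

# Min-max scaling of a single made-up feature column:
# x_scaled = (x - min(x)) / (max(x) - min(x))
data = np.array([4.0, 6.0, 8.0, 10.0])
scaled = (data - data.min()) / (data.max() - data.min())
print(scaled)  # smallest value maps to 0, largest to 1
```

The minimum always maps to 0 and the maximum to 1, with everything else spaced proportionally in between, which is exactly what the scaler does per feature.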
The `MinMaxScaler` transforms features by scaling each feature to a given range, which is (0, 1) in our case:

x_scaled = (x - min(x)) / (max(x) - min(x))

Notice that we use `.fit_transform()` on `X_train`, while we use `.transform()` on `X_val` and `X_test`. We do this because we want to scale the validation and test sets with the same parameters as the train set, to avoid data leakage: `.fit_transform()` calculates the scaling values and applies them, while `.transform()` only applies the previously calculated values.

```python
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)

X_train, y_train = np.array(X_train), np.array(y_train)
X_val, y_val = np.array(X_val), np.array(y_val)
X_test, y_test = np.array(X_test), np.array(y_test)
```

## Visualize Class Distribution in Train, Val, and Test

Once we've split our data into train, validation, and test sets, let's make sure the distribution of classes is similar in all three sets.

To do that, let's create a function called `get_class_distribution()`. This function takes as input an object `y`, i.e. `y_train`, `y_val`, or `y_test`. Inside the function, we initialize a dictionary which contains the output classes as keys and their counts as values, with all counts initialized to 0. We then loop through our `y` object and update the dictionary.

```python
def get_class_distribution(obj):
    count_dict = {
        "rating_3": 0,
        "rating_4": 0,
        "rating_5": 0,
        "rating_6": 0,
        "rating_7": 0,
        "rating_8": 0,
    }

    for i in obj:
        if i == 0:
            count_dict['rating_3'] += 1
        elif i == 1:
            count_dict['rating_4'] += 1
        elif i == 2:
            count_dict['rating_5'] += 1
        elif i == 3:
            count_dict['rating_6'] += 1
        elif i == 4:
            count_dict['rating_7'] += 1
        elif i == 5:
            count_dict['rating_8'] += 1
        else:
            print("Check classes.")

    return count_dict
```

Once we have the dictionary of counts, we use the Seaborn library to plot the bar charts.
To make the plot, we first convert our dictionary to a dataframe using `pd.DataFrame.from_dict([get_class_distribution(y_train)])`. Subsequently, we use `.melt()` to convert the dataframe into long format, and finally use `sns.barplot()` to build the plots.

```python
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(25,7))

# Train
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_train)]).melt(), x="variable", y="value", hue="variable", ax=axes[0]).set_title('Class Distribution in Train Set')

# Validation
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_val)]).melt(), x="variable", y="value", hue="variable", ax=axes[1]).set_title('Class Distribution in Val Set')

# Test
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_test)]).melt(), x="variable", y="value", hue="variable", ax=axes[2]).set_title('Class Distribution in Test Set')
```

# Neural Network

We've now reached what we all had been waiting for!

## Custom Dataset

First up, let's define a custom dataset. This dataset will be used by the dataloader to pass our data into our model. We initialize our dataset by passing X and y as inputs. Make sure X is a `float` while y is `long`.

```python
class ClassifierDataset(Dataset):

    def __init__(self, X_data, y_data):
        self.X_data = X_data
        self.y_data = y_data

    def __getitem__(self, index):
        return self.X_data[index], self.y_data[index]

    def __len__(self):
        return len(self.X_data)


train_dataset = ClassifierDataset(torch.from_numpy(X_train).float(), torch.from_numpy(y_train).long())
val_dataset = ClassifierDataset(torch.from_numpy(X_val).float(), torch.from_numpy(y_val).long())
test_dataset = ClassifierDataset(torch.from_numpy(X_test).float(), torch.from_numpy(y_test).long())
```

## Weighted Sampling

Because there's a class imbalance, we used a stratified split to create our train, validation, and test sets. While that helps, it still does not ensure that each mini-batch of our model sees all our classes.
We need to over-sample the classes with fewer samples. To do that, we use the `WeightedRandomSampler`.

First, we obtain a list called `target_list` which contains all our outputs. This list is then converted to a tensor and shuffled.

```python
target_list = []
for _, t in train_dataset:
    target_list.append(t)

target_list = torch.tensor(target_list)
target_list = target_list[torch.randperm(len(target_list))]
```

Then, we obtain the count of each class in our training set and use the reciprocal of each count as its weight. Now that we've calculated the weights for each class, we can proceed.

```python
class_count = [i for i in get_class_distribution(y_train).values()]
class_weights = 1. / torch.tensor(class_count, dtype=torch.float)
print(class_weights)

###################### OUTPUT ######################
# tensor([0.1429, 0.0263, 0.0020, 0.0022, 0.0070, 0.0714])
```

`WeightedRandomSampler` expects a weight for each sample. We obtain that as follows.

```python
class_weights_all = class_weights[target_list]
```

Finally, let's initialize our `WeightedRandomSampler`. We'll pass this to our dataloader below.

```python
weighted_sampler = WeightedRandomSampler(
    weights=class_weights_all,
    num_samples=len(class_weights_all),
    replacement=True
)
```

## Model Parameters

Before we proceed any further, let's define a few parameters that we'll use down the line.

```python
EPOCHS = 400
BATCH_SIZE = 64
LEARNING_RATE = 0.001
NUM_FEATURES = len(X.columns)
NUM_CLASSES = 6
```

For `train_loader` we'll use `batch_size = 64` and pass our sampler to it. Note that we're not using `shuffle=True` in our `train_loader`, because we're already using a sampler; the two options are mutually exclusive.

For `val_loader` and `test_loader` we'll use `batch_size = 1`.

```python
train_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, sampler=weighted_sampler)
val_loader = DataLoader(dataset=val_dataset, batch_size=1)
test_loader = DataLoader(dataset=test_dataset, batch_size=1)
```

## Define Neural Net Architecture

Let's define a simple 3-layer feed-forward network with dropout and batch-norm.

```python
class MulticlassClassification(nn.Module):

    def __init__(self, num_feature, num_class):
        super(MulticlassClassification, self).__init__()

        self.layer_1 = nn.Linear(num_feature, 512)
        self.layer_2 = nn.Linear(512, 128)
        self.layer_3 = nn.Linear(128, 64)
        self.layer_out = nn.Linear(64, num_class)

        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.2)
        self.batchnorm1 = nn.BatchNorm1d(512)
        self.batchnorm2 = nn.BatchNorm1d(128)
        self.batchnorm3 = nn.BatchNorm1d(64)

    def forward(self, x):
        x = self.layer_1(x)
        x = self.batchnorm1(x)
        x = self.relu(x)

        x = self.layer_2(x)
        x = self.batchnorm2(x)
        x = self.relu(x)
        x = self.dropout(x)

        x = self.layer_3(x)
        x = self.batchnorm3(x)
        x = self.relu(x)
        x = self.dropout(x)

        x = self.layer_out(x)
        return x
```

Check if the GPU is active.

```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

###################### OUTPUT ######################
# cuda:0
```

Initialize the model, optimizer, and loss function, and transfer the model to the GPU. We're using `nn.CrossEntropyLoss` because this is a multiclass classification problem. We don't have to manually apply a `log_softmax` layer after our final layer because `nn.CrossEntropyLoss` does that for us. However, we do need to apply `log_softmax` ourselves when computing metrics for validation and testing.

```python
model = MulticlassClassification(num_feature=NUM_FEATURES, num_class=NUM_CLASSES)
model.to(device)

criterion = nn.CrossEntropyLoss(weight=class_weights.to(device))
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)

print(model)

###################### OUTPUT ######################
# MulticlassClassification(
#   (layer_1): Linear(in_features=11, out_features=512, bias=True)
#   (layer_2): Linear(in_features=512, out_features=128, bias=True)
#   (layer_3): Linear(in_features=128, out_features=64, bias=True)
#   (layer_out): Linear(in_features=64, out_features=6, bias=True)
#   (relu): ReLU()
#   (dropout): Dropout(p=0.2, inplace=False)
#   (batchnorm1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
#   (batchnorm2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
#   (batchnorm3): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
# )
```

# Train the model

Before we start our training, let's define a function to calculate accuracy per epoch. This function takes `y_pred` and `y_test` as input arguments.
We then apply `log_softmax` to `y_pred` and extract the class which has the higher probability. After that, we compare the predicted classes with the actual classes to calculate the accuracy.

```python
def multi_acc(y_pred, y_test):
    y_pred_softmax = torch.log_softmax(y_pred, dim=1)
    _, y_pred_tags = torch.max(y_pred_softmax, dim=1)

    correct_pred = (y_pred_tags == y_test).float()
    acc = correct_pred.sum() / len(correct_pred)

    acc = torch.round(acc * 100)

    return acc
```

We'll also define 2 dictionaries which will store the accuracy/epoch and loss/epoch for both the train and validation sets.

```python
accuracy_stats = {
    'train': [],
    "val": []
}

loss_stats = {
    'train': [],
    "val": []
}
```

Let's TRAAAAAIN our model!

```python
print("Begin training.")

for e in tqdm(range(1, EPOCHS + 1)):

    # TRAINING
    train_epoch_loss = 0
    train_epoch_acc = 0

    model.train()
    for X_train_batch, y_train_batch in train_loader:
        X_train_batch, y_train_batch = X_train_batch.to(device), y_train_batch.to(device)
        optimizer.zero_grad()

        y_train_pred = model(X_train_batch)

        train_loss = criterion(y_train_pred, y_train_batch)
        train_acc = multi_acc(y_train_pred, y_train_batch)

        train_loss.backward()
        optimizer.step()

        train_epoch_loss += train_loss.item()
        train_epoch_acc += train_acc.item()

    # VALIDATION
    with torch.no_grad():
        val_epoch_loss = 0
        val_epoch_acc = 0

        model.eval()
        for X_val_batch, y_val_batch in val_loader:
            X_val_batch, y_val_batch = X_val_batch.to(device), y_val_batch.to(device)

            y_val_pred = model(X_val_batch)

            val_loss = criterion(y_val_pred, y_val_batch)
            val_acc = multi_acc(y_val_pred, y_val_batch)

            val_epoch_loss += val_loss.item()
            val_epoch_acc += val_acc.item()

    loss_stats['train'].append(train_epoch_loss / len(train_loader))
    loss_stats['val'].append(val_epoch_loss / len(val_loader))
    accuracy_stats['train'].append(train_epoch_acc / len(train_loader))
    accuracy_stats['val'].append(val_epoch_acc / len(val_loader))

    print(f'Epoch {e+0:03}: | Train Loss: {train_epoch_loss/len(train_loader):.5f} | Val Loss: {val_epoch_loss/len(val_loader):.5f} | Train Acc: {train_epoch_acc/len(train_loader):.3f} | Val Acc: {val_epoch_acc/len(val_loader):.3f}')

###################### OUTPUT ######################
# Epoch 001: | Train Loss: 1.55731 | Val Loss: 1.48898 | Train Acc: 5.556 | Val Acc: 0.000
# Epoch 002: | Train Loss: 1.55930 | Val Loss: 1.27569 | Train Acc: 50.000 | Val Acc: 100.000
# ...
# Epoch 399: | Train Loss: 0.11390 | Val Loss: 0.10750 | Train Acc: 100.000 | Val Acc: 100.000
# Epoch 400: | Train Loss: 0.11665 | Val Loss: 0.07421 | Train Acc: 100.000 | Val Acc: 100.000
```

You can see we've put `model.train()` before the loop over batches. `model.train()` tells PyTorch that you're in training mode. Why do we need to do that? If you're using layers such as `Dropout` or `BatchNorm`, which behave differently during training and evaluation (for example, dropout is disabled during evaluation), you need to tell PyTorch to act accordingly. Similarly, we call `model.eval()` when we test our model; we'll see that below.

Back to training: we start a for-loop over epochs. At the top of this for-loop, we initialize our loss and accuracy per epoch to 0. After every epoch, we print out the loss/accuracy and reset them back to 0.

Then we have another for-loop, which fetches our data in batches from the `train_loader`.

We call `optimizer.zero_grad()` before we make any predictions. Since the `backward()` call accumulates gradients, we need to reset them to 0 manually per mini-batch.

From our defined model, we then obtain a prediction, compute the loss (and accuracy) for that mini-batch, and perform back-propagation using `train_loss.backward()` and `optimizer.step()`.

Finally, we add up all the mini-batch losses (and accuracies) and divide by the number of mini-batches, i.e. the length of `train_loader`, to obtain the average loss/accuracy per epoch.

The procedure we follow for validation is exactly the same, except that we wrap it in `torch.no_grad()` and do not perform any back-propagation. `torch.no_grad()` tells PyTorch that we do not need gradients, which reduces memory usage and speeds up computation.

# Visualize Loss and Accuracy

To plot the loss and accuracy line plots, we again create a dataframe from the `accuracy_stats` and `loss_stats` dictionaries.

```python
# Create dataframes
train_val_acc_df = pd.DataFrame.from_dict(accuracy_stats).reset_index().melt(id_vars=['index']).rename(columns={"index": "epochs"})
train_val_loss_df = pd.DataFrame.from_dict(loss_stats).reset_index().melt(id_vars=['index']).rename(columns={"index": "epochs"})

# Plot the dataframes
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20,7))
sns.lineplot(data=train_val_acc_df, x="epochs", y="value", hue="variable", ax=axes[0]).set_title('Train-Val Accuracy/Epoch')
sns.lineplot(data=train_val_loss_df, x="epochs", y="value", hue="variable", ax=axes[1]).set_title('Train-Val Loss/Epoch')
```
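The imports at the top include `confusion_matrix` and `classification_report`, but this excerpt ends before the evaluation section that would use them. The following is a hedged, self-contained sketch of what such a test loop typically looks like: the tiny stand-in `model` and `test_loader` below are invented so the snippet runs on its own, whereas in the article you would reuse the `model`, `test_loader`, and `device` defined above.

```python
import torch
import torch.nn as nn

# Stand-ins (not from the article) so this sketch is runnable in isolation:
torch.manual_seed(0)
model = nn.Linear(11, 6)  # plays the role of the trained MulticlassClassification
test_loader = [(torch.randn(1, 11), torch.tensor([3])) for _ in range(4)]
device = torch.device("cpu")

y_pred_list = []
with torch.no_grad():     # no gradients needed at test time
    model.eval()          # evaluation mode: dropout off, batch-norm uses running stats
    for X_batch, _ in test_loader:
        X_batch = X_batch.to(device)
        y_test_pred = model(X_batch)
        _, y_pred_tags = torch.max(y_test_pred, dim=1)  # class with the highest score
        y_pred_list.append(y_pred_tags.item())

print(y_pred_list)  # one predicted class index (0..5) per test sample
```

With the real model and loader, the collected predictions would then be passed to `classification_report` (and, via `idx2class`, mapped back to the original quality labels).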
https://vfpun.ha3.in.net/equations-with-variables-on-both-sides.html
Equations with variables on both sides

# Equations with variables on both sides

You may have noticed that in many equations the variable appears on only one side. Equations can also have variables on both sides of the equal sign. To solve such an equation, collect the variable terms on one side and the constants on the other, using the addition or subtraction property of equality, then isolate the variable. A quantity with a variable can be treated just like a quantity without variables: it follows all the usual rules.

A general procedure:

1. If the equation has fractions, multiply EVERY term on both sides by the LCM of the denominators; each denominator then divides into its multiple, which clears the fractions. (For example, the LCM of 3 and 5 is 15.)
2. Combine any like terms on each side (do not combine across the = sign).
3. Add or subtract the variable terms so the variable appears on only one side. For example, in j + 4j = 20 - 4j, add 4j to both sides.
4. Get all constants on the other side of the equation.
5. Divide by the coefficient of the variable; once the variable stands alone, the number on the other side is the solution.
6. Check your answer by substituting it into both sides. The more complicated your equation is, the more important it is to check your answer.

Practice examples: solve the equation 10x + 14 = -2x + 38, explaining all the steps of your solution; solve 3x + 2 = 4x - 1 by getting the variables on one side of the equation, showing algebraic steps.

Not every equation of this form has exactly one solution. If simplification leads to a statement that is true for every value of the variable, the equation is an identity and has infinitely many solutions. If it leads to a false statement, the equation has no solution.

Challenge: use the digits 1 to 9, at most TWO times each, to fill in the boxes to make an equation with no solutions.

There are many real-world problems that can be solved by this type of equation, for example: find a two-digit number whose tens digit is 1 less than its units digit; or compare rental rates for cars or the billing rates of two different phone plans. Another: Will averages 18 points a game and is the all-time scoring leader on his team with 483 points.
have an equation with variables on both sides of the equality, we need to do the next procedure to find the root: transpose elements, solve6 Solving Decimal Equations 3Start studying Solving equations with variables on both sides\n\nMarch 28, 2019 March 28, 2019 Craig Barton\n\nClick here to get the accompanying notes, practice exercises, and assessmentsa N TAJl OlQ 4r QiXgvh Vt3sM er3e7sEear ivre Ydj3 I will demonstrate how an equation with variables on both sides, both with and without Oct 19, 2013 · Solve the Equation : 3n + 5 = 2n + 7 We cannot solve this Equation using “Onion Skins” or “Back-Tracking”, because our Variable letter “n” is on both sides of the Equation\n\nSolving Equations With Variables on Both Sides VideoWhat is the value of x? 3Some equations are always false\n\nNov 22, 2019 · To solve systems of algebraic equations containing two variables, start by moving the variables to different sides of the equationLet’s cancel out the lowest variable term\n\nWorksheets are Solving linear equations variable on both sides, Kuta equations with variables on both sides, Solving equationsvariables on both, Solving equations with variables on both sides of the, Solving linear equations, Work 2 2 solving equations in one variable, Solving equations containing fractions and decimals, Mathvine\n\nMethod: Perform operations to both sides of the equation in order to isolate the variable5) The answer should look like: x=20 or 20=x (Note: the variables and numbers may vary in your answerSome equations have solutionsHomework help math video for Algebra studentsThe idea is that you must move all of the variables to one side of the equation and all of the integers to the other side of the equation\n\nEquations with variables on both sides involve a few more steps, so I want to make sure you understand the simpler equations first before moving forward\n\nAdd 7 to both sidesThe variable must be isolated on one side of the equationRULE #1: you can add, subtract, multiply 
and divide by anything, as long as you do the same thing to both sides of the equals sign\n\n412 44 3 x x = = Divide both sides by 4Use Distributive Property, if necessaryI actually start the lesson with a short review of how to combine like termscom is the best site to take a look at! Holt McDougal Algebra 1 1-5 Solving Equations with Variables on Both Sides An identity is an equation that is true for all values of the variable\n\nFor example, to solve 5 2 – x = 3 3 x + 2, follow these steps: Take the log ofOct 06, 2018 · to solve a 2 step equation you combine like terms in needednotebook 2 September 18, 2017 Press the tabs to view details25 per minute per call requires a variable because the total amount will change based on the number of minutes\n\n) Example ProblemsSolve the equation and explain why it has an infinite number of solutionsDisplaying top 8 worksheets found for - Equations With Variables On Both Sides\n\nSolving Equations with Variables on Both SidesOn this page, you will find Algebra worksheets mostly for middle school students on algebra topics such as algebraic expressions, equations and graphing functions\n\nThis is a maze composed of 11 equations with variables on both sidesIf you encounter a variable on both sides of the equal sign, don't assume it's a typo and move on to the next problem; it may very well be there on purpose\n\nWorksheets are Solving linear equations variable on both sides, Multi step equations date period, Kuta equations with variables on both sides, Linear equations work, Solving equations with variables on both sides of the, Work 2 2 solving equations in one variable, SolvingDirections: Solve the multi step equations belowUse tiles to represent variables and constants, learn how to represent and solve algebra problem\n\n8 A - Write one-variable equations or inequalities with variables on both sides that represent problems using rational number coefficients and constants7n – 2 = 5n + 6 –5n –5n 2n – 2 = 6 Since n is 
multiplied by 2, divide both sides by 2 to undo the multiplicationThis method finds it use when the variables and constants are present on both sides of the linear equation to be solvedInequalities with Variables on Both Sides Worksheet (problems 11-24) Powered by Create your own unique website with customizable templates\n\nIf an input is given then it can easily show the result for the given number\n\nEmpty chairs at empty tables sheet music\n\npanasonic 2 lines cordless phone" ]
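The collect-then-isolate procedure above can be sketched as code for the linear case a·x + b = c·x + d; the helper name is mine, not part of the lesson:

```python
# Solve a*x + b = c*x + d by moving variable terms to one side and constants
# to the other, then dividing -- the same steps as in the lesson.

def solve_both_sides(a, b, c, d):
    coeff = a - c   # subtract c*x from both sides: (a - c)*x + b = d
    const = d - b   # subtract b from both sides:   (a - c)*x = d - b
    if coeff == 0:
        # 0*x = const: an identity if const is 0, otherwise no solution.
        return "identity" if const == 0 else "no solution"
    return const / coeff  # divide both sides by the coefficient

print(solve_both_sides(3, 5, 2, 7))   # 3n + 5 = 2n + 7  ->  2.0
print(solve_both_sides(7, -2, 5, 6))  # 7n - 2 = 5n + 6  ->  4.0
print(solve_both_sides(2, 1, 2, 3))   # 2x + 1 = 2x + 3  ->  no solution
```

A zero coefficient after collecting the variable terms is exactly the identity / no-solution split described in the lesson.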
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8932822,"math_prob":0.99747324,"size":11654,"snap":"2019-51-2020-05","text_gpt3_token_len":2624,"char_repetition_ratio":0.26429185,"word_repetition_ratio":0.1489578,"special_character_ratio":0.20945598,"punctuation_ratio":0.055609286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.997342,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T23:02:18Z\",\"WARC-Record-ID\":\"<urn:uuid:9b3a9c73-6f98-4caf-90fd-bab2cb1df6b0>\",\"Content-Length\":\"24501\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:017635b8-f39e-4b61-92dc-517351d0b68a>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1bdade0-2eae-4227-be66-857c24e0e0f3>\",\"WARC-IP-Address\":\"104.24.117.14\",\"WARC-Target-URI\":\"https://vfpun.ha3.in.net/equations-with-variables-on-both-sides.html\",\"WARC-Payload-Digest\":\"sha1:GOZXI5EA37DFWMOA7CHPPL4TKNASS6OE\",\"WARC-Block-Digest\":\"sha1:E5HWU44NVATGKWCMNCMMPB2NSRAMQDIP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250593994.14_warc_CC-MAIN-20200118221909-20200119005909-00464.warc.gz\"}"}
https://www.colorhexa.com/d7ebeb
[ "# #d7ebeb Color Information\n\nIn a RGB color space, hex #d7ebeb is composed of 84.3% red, 92.2% green and 92.2% blue. Whereas in a CMYK color space, it is composed of 8.5% cyan, 0% magenta, 0% yellow and 7.8% black. It has a hue angle of 180 degrees, a saturation of 33.3% and a lightness of 88.2%. #d7ebeb color hex could be obtained by blending #ffffff with #afd7d7. Closest websafe color is: #ccffff.\n\n• R 84\n• G 92\n• B 92\nRGB color chart\n• C 9\n• M 0\n• Y 0\n• K 8\nCMYK color chart\n\n#d7ebeb color description : Light grayish cyan.\n\n# #d7ebeb Color Conversion\n\nThe hexadecimal color #d7ebeb has RGB values of R:215, G:235, B:235 and CMYK values of C:0.09, M:0, Y:0, K:0.08. Its decimal value is 14150635.\n\nHex triplet RGB Decimal d7ebeb `#d7ebeb` 215, 235, 235 `rgb(215,235,235)` 84.3, 92.2, 92.2 `rgb(84.3%,92.2%,92.2%)` 9, 0, 0, 8 180°, 33.3, 88.2 `hsl(180,33.3%,88.2%)` 180°, 8.5, 92.2 ccffff `#ccffff`\nCIE-LAB 91.622, -6.572, -2.263 72.725, 79.861, 90.176 0.3, 0.329, 79.861 91.622, 6.951, 199.003 91.622, -10.823, -2.346 89.365, -11.126, 2.727 11010111, 11101011, 11101011\n\n# Color Schemes with #d7ebeb\n\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #ebd7d7\n``#ebd7d7` `rgb(235,215,215)``\nComplementary Color\n• #d7ebe1\n``#d7ebe1` `rgb(215,235,225)``\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #d7e1eb\n``#d7e1eb` `rgb(215,225,235)``\nAnalogous Color\n• #ebe1d7\n``#ebe1d7` `rgb(235,225,215)``\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #ebd7e1\n``#ebd7e1` `rgb(235,215,225)``\nSplit Complementary Color\n• #ebebd7\n``#ebebd7` `rgb(235,235,215)``\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #ebd7eb\n``#ebd7eb` `rgb(235,215,235)``\n• #d7ebd7\n``#d7ebd7` `rgb(215,235,215)``\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #ebd7eb\n``#ebd7eb` `rgb(235,215,235)``\n• #ebd7d7\n``#ebd7d7` `rgb(235,215,215)``\n• #a4d2d2\n``#a4d2d2` `rgb(164,210,210)``\n``#b5dada` `rgb(181,218,218)``\n• #c6e3e3\n``#c6e3e3` `rgb(198,227,227)``\n• #d7ebeb\n``#d7ebeb` 
`rgb(215,235,235)``\n• #e8f4f4\n``#e8f4f4` `rgb(232,244,244)``\n• #f9fcfc\n``#f9fcfc` `rgb(249,252,252)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nMonochromatic Color\n\n# Alternatives to #d7ebeb\n\nBelow, you can see some colors close to #d7ebeb. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #d7ebe6\n``#d7ebe6` `rgb(215,235,230)``\n• #d7ebe8\n``#d7ebe8` `rgb(215,235,232)``\n• #d7ebe9\n``#d7ebe9` `rgb(215,235,233)``\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #d7e9eb\n``#d7e9eb` `rgb(215,233,235)``\n• #d7e8eb\n``#d7e8eb` `rgb(215,232,235)``\n• #d7e6eb\n``#d7e6eb` `rgb(215,230,235)``\nSimilar Colors\n\n# #d7ebeb Preview\n\nThis text has a font color of #d7ebeb.\n\n``<span style=\"color:#d7ebeb;\">Text here</span>``\n#d7ebeb background color\n\nThis paragraph has a background color of #d7ebeb.\n\n``<p style=\"background-color:#d7ebeb;\">Content here</p>``\n#d7ebeb border color\n\nThis element has a border color of #d7ebeb.\n\n``<div style=\"border:1px solid #d7ebeb;\">Content here</div>``\nCSS codes\n``.text {color:#d7ebeb;}``\n``.background {background-color:#d7ebeb;}``\n``.border {border:1px solid #d7ebeb;}``\n\n# Shades and Tints of #d7ebeb\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #060c0c is the darkest color, while #feffff is the lightest one.\n\n• #060c0c\n``#060c0c` `rgb(6,12,12)``\n• #0d1919\n``#0d1919` `rgb(13,25,25)``\n• #132626\n``#132626` `rgb(19,38,38)``\n• #1a3434\n``#1a3434` `rgb(26,52,52)``\n• #204141\n``#204141` `rgb(32,65,65)``\n• #274e4e\n``#274e4e` `rgb(39,78,78)``\n• #2d5b5b\n``#2d5b5b` `rgb(45,91,91)``\n• #346868\n``#346868` `rgb(52,104,104)``\n• #3a7575\n``#3a7575` `rgb(58,117,117)``\n• #418282\n``#418282` `rgb(65,130,130)``\n• #488f8f\n``#488f8f` `rgb(72,143,143)``\n• #4e9c9c\n``#4e9c9c` `rgb(78,156,156)``\n• #55a9a9\n``#55a9a9` `rgb(85,169,169)``\n• #61b0b0\n``#61b0b0` `rgb(97,176,176)``\n• #6eb7b7\n``#6eb7b7` `rgb(110,183,183)``\n• #7bbdbd\n``#7bbdbd` `rgb(123,189,189)``\n• #89c4c4\n``#89c4c4` `rgb(137,196,196)``\n• #96caca\n``#96caca` `rgb(150,202,202)``\n• #a3d1d1\n``#a3d1d1` `rgb(163,209,209)``\n• #b0d7d7\n``#b0d7d7` `rgb(176,215,215)``\n• #bddede\n``#bddede` `rgb(189,222,222)``\n• #cae4e4\n``#cae4e4` `rgb(202,228,228)``\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #e4f2f2\n``#e4f2f2` `rgb(228,242,242)``\n• #f1f8f8\n``#f1f8f8` `rgb(241,248,248)``\n• #feffff\n``#feffff` `rgb(254,255,255)``\nTint Color Variation\n\n# Tones of #d7ebeb\n\nA tone is produced by adding gray to any pure hue. 
In this case, #e0e2e2 is the less saturated color, while #c5fdfd is the most saturated one.\n\n• #e0e2e2\n``#e0e2e2` `rgb(224,226,226)``\n• #dee4e4\n``#dee4e4` `rgb(222,228,228)``\n• #dce6e6\n``#dce6e6` `rgb(220,230,230)``\n• #d9e9e9\n``#d9e9e9` `rgb(217,233,233)``\n• #d7ebeb\n``#d7ebeb` `rgb(215,235,235)``\n• #d5eded\n``#d5eded` `rgb(213,237,237)``\n• #d2f0f0\n``#d2f0f0` `rgb(210,240,240)``\n• #d0f2f2\n``#d0f2f2` `rgb(208,242,242)``\n• #cef4f4\n``#cef4f4` `rgb(206,244,244)``\n• #cbf7f7\n``#cbf7f7` `rgb(203,247,247)``\n• #c9f9f9\n``#c9f9f9` `rgb(201,249,249)``\n• #c7fbfb\n``#c7fbfb` `rgb(199,251,251)``\n• #c5fdfd\n``#c5fdfd` `rgb(197,253,253)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #d7ebeb is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
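The hex-to-RGB and RGB-to-CMYK figures listed above follow from straightforward arithmetic; here is a minimal sketch (function names are mine), checked against the values shown for #d7ebeb:

```python
# Hex triplet -> RGB: each pair of hex digits is one 0-255 channel.
def hex_to_rgb(hex_str):
    h = hex_str.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# RGB -> CMYK: K is the complement of the brightest channel; C/M/Y are the
# per-channel complements rescaled by (1 - K).
def rgb_to_cmyk(r, g, b):
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)
    if k == 1:
        return (0.0, 0.0, 0.0, 1.0)  # pure black
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return (c, m, y, k)

r, g, b = hex_to_rgb("#d7ebeb")
print((r, g, b))                             # (215, 235, 235), as in the table
c, m, y, k = rgb_to_cmyk(r, g, b)
print(round(c * 100, 1), round(k * 100, 1))  # 8.5 and 7.8 percent
```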
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5117029,"math_prob":0.57750535,"size":3716,"snap":"2021-04-2021-17","text_gpt3_token_len":1708,"char_repetition_ratio":0.1247306,"word_repetition_ratio":0.011090573,"special_character_ratio":0.51453173,"punctuation_ratio":0.23430493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96583676,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-27T16:17:05Z\",\"WARC-Record-ID\":\"<urn:uuid:06a447ff-c854-48a7-810a-aee85b523fdf>\",\"Content-Length\":\"36348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:917379fa-7e68-4cd2-a358-763264282c75>\",\"WARC-Concurrent-To\":\"<urn:uuid:4aa6829a-21e1-4275-a185-9f10fe41cb27>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/d7ebeb\",\"WARC-Payload-Digest\":\"sha1:EHJA4Q5EAZQBVILQINOQIQPFZLHED4NL\",\"WARC-Block-Digest\":\"sha1:EWKWRDVUMQZITFOYIRQOJRWNJMOVBRXT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704828358.86_warc_CC-MAIN-20210127152334-20210127182334-00487.warc.gz\"}"}
https://robotics.stackexchange.com/questions/19163/what-is-required-to-get-the-roll-pitch-and-yaw-of-an-aerial-vehicle/19168
# What is required to get the roll, pitch and yaw of an aerial vehicle?\n\nTL;DR: What is the method (in terms of sensors and algorithm) to get the roll, pitch and yaw of an aircraft at any instant?\n\nI am planning to build a hobby aircraft. I am confused about which kinds of sensors I should use and how to use them in order to get the roll, pitch and yaw angles of the aircraft.\n\nI think I also have some problems understanding the concept.\n\nWhat are the ways/methods/agents to get the orientation of an aircraft at any instant?\n\nSome sources mention the importance of the order in which roll, pitch and yaw are applied, but I cannot understand why this is relevant.\n\nI have used accelerometer values by inputting them into some formulas from the internet (which everybody uses but nobody explains well) to get the roll and pitch values. However, I could not understand how to manipulate them in order to meet my requirements.\n\nI also have a basic understanding of what a gyroscope is.\n\n• "some formulas from the internet (which everybody uses but nobody explains well) to get the roll and pitch values" I would start by trying to understand those formulas. Can you put up those formulas here with the question? – vvy Jul 26 at 19:01\n• @vvy yes I know, but I really am bad at 3D geometric imagination. Do you have any suggestions that I can use to visualize such vectors/axes? – muyustan Jul 26 at 19:03\n• The question you've asked is very general. It's a good idea to start with a smaller question. Usually, gyro rate sensors are used for RPY (roll, pitch, yaw). It might be helpful to understand what the raw output of a gyro is, and then the transformation to obtain the quantity of interest (here it's RPY). Those equations you're referring to would probably explain this transformation. – vvy Jul 26 at 19:12\n• @vvy I am clearer about gyro calculations than accelerometer ones, actually. I know the following (please revise and give me feedback on it): a gyro measures the rotation speed about an axis, so we can take the gyro readings (dps) over a short time (dt) and add dps*dt to the p/r/y variable - an approximation of integration. So by this, with respect to an initial attitude, we can have relative p/r/y angles. But this has two disadvantages: the drift issue (errors accumulate because of the integration) and the problem mentioned here: youtu.be/4BoIE8YQwM8?t=596 – muyustan Jul 26 at 19:20\n• @vvy so I am not a total stranger, I have the basics somehow but need to relate each of them. – muyustan Jul 26 at 19:21" ]
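The dead-reckoning scheme described in the comments (accumulate rate × dt per gyro sample, relative to an initial attitude) can be written out as a short sketch; the sample values, time step, and function name are made up for illustration:

```python
# Dead-reckoning one axis from gyro rate samples, as described in the
# comments: angle += rate_dps * dt for each sample. Illustrative values only.

def integrate_gyro(initial_deg, rates_dps, dt):
    """Accumulate angle over gyro samples (deg/s) taken every dt seconds."""
    angle = initial_deg
    for rate in rates_dps:
        angle += rate * dt  # approximate integral of angular rate
    return angle

# A constant 10 deg/s for 1 s (100 samples at dt = 0.01 s) gives ~10 degrees.
print(integrate_gyro(0.0, [10.0] * 100, 0.01))

# The drift issue: a small constant bias of 0.5 deg/s accumulates into
# ~0.5 degrees of error after just one second, and keeps growing.
print(integrate_gyro(0.0, [0.5] * 100, 0.01))
```

This also shows why the error grows without bound: a sensor bias is integrated just like the true rate, which is the drift issue mentioned in the last comment.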
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95958763,"math_prob":0.783504,"size":950,"snap":"2019-51-2020-05","text_gpt3_token_len":196,"char_repetition_ratio":0.118393235,"word_repetition_ratio":0.04819277,"special_character_ratio":0.20105264,"punctuation_ratio":0.08421053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9596384,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T11:41:11Z\",\"WARC-Record-ID\":\"<urn:uuid:fa7c8484-e67c-4324-8c8f-7f86f284e87a>\",\"Content-Length\":\"145424\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8cb2a252-38f9-4de0-aebe-d7181ad2e9ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a41331f-c48c-4e00-8fbe-1a65600fe3bc>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://robotics.stackexchange.com/questions/19163/what-is-required-to-get-the-roll-pitch-and-yaw-of-an-aerial-vehicle/19168\",\"WARC-Payload-Digest\":\"sha1:NLEPI5NWJPLVWIBNL6A3EYEHF7JUMDKK\",\"WARC-Block-Digest\":\"sha1:RCTNB6QJS3LOFMCXOPKQZADJKDJUOMBH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540543252.46_warc_CC-MAIN-20191212102302-20191212130302-00432.warc.gz\"}"}
https://www.knowledgeadda.com/2014/06/bpcl-management-trainee-electrical.html
[ "# BPCL Management Trainee Electrical Paper - 2010\n\nHi All,\n\nI attended the written test for Management Trainee BPCL (Electrical) on 22nd January, 2010. I wanted to give all future aspirants some information regarding the pattern followed in the test. So here it is.\n\nTime duration: 2 hours\nTotal questions: 120, out of which\n- 60 are general aptitude\n- 60 are technical\n\nThe aptitude part is very easy; I will give the types of questions:\n\n* English section with fill in the blanks, passage comprehension and sentence correction. Note: no antonym/synonym parts\n\n* A few questions on logic deductions (all/some kinds), a few on coding-decoding, a few picture series (didn't find many arithmetic questions)\n\n* General knowledge stuff, very simple ones actually: who won the ICC World Cup 20-20 in 2008, what H2O stands for (!), the biggest planet among Jupiter, Saturn, Venus and Earth, whose birthday is Teachers' Day, the father of the constitution, the number of lions on the Ashoka stupa, who won an Oscar recently, and the like. Simple ones, as you can see.\n\nIn the tech part, again the questions are easy if you are clear enough on the basics of Electrical machines, Control systems, Networks, Power distribution and the lot.\n\nI will list a few questions I remember.\n\n1) Given a capacitor with its capacitance and the value of the voltage across it, find the energy stored\n\n2) To reduce the cost of power generation, what should be done: increase/decrease the diversity factor and increase/decrease the load factor\n\n3) An SCR works in forward bias/reverse bias/both\n\n4) An op-amp with a shunt resistor at the inverting terminal and a diode across it acts as a half-wave rectifier/full-wave rectifier/log amplifier\n\n5) Configuration of a 2 kb ROM with 512*4 cells\n\n6) How many electrons flowing per second constitute 1 A of current\n\n7) A Nyquist plot was given with a curve to the left of the plane: is it stable/marginally stable/marginally unstable?\n\n8) Given the maximum demand and connected load of a consumer, find the demand factor\n\n9) Principle of the transformer\n\n10) An electronic voltmeter has - transistor and some other options\n\n11) What does a Schmitt trigger generate\n\n12) What is the decisive factor for the design of EHV lines: corona, switching voltage\n\n13) What is the max voltage on low tension and high tension respectively: 11 kV & 33 kV, 1 kV & 66 kV\n\n14) For a chopper circuit to work, what is required: forced commutation/natural commutation/both\n\n15) Something about the JFET Q point: is it based on drain resistance/source resistance\n\n16) In a parallel resonant circuit, the values of L and C are given; find the value of R at 10 krad/s\n\n17) For a resonant circuit: admittance is max/impedance is max/impedance is reactive. Which is correct?\n\n18) To have max heating, how should two identical coils be connected: parallel/series\n\n19) State equation for an nth-order system\n\n20) Questions about an equation: was it underdamped or not\n\n21) Given a transfer function, find its response to an impulse input\n\n22) For a unity feedback system, the output is nearly equal to the input/greater\n\n23) Two systems with transfer functions g1 and g2 connected in series: what is the net transfer function, g1*g2 or g1 + g2?\n\n24) How is reactive power generated at load centres: shunt capacitor, series capacitor, tap-changing transformer\n\n25) To measure power factor we use a wattmeter/voltmeter/both an ammeter and a voltmeter\n\n26) What is the value of the resistor to be connected across an ammeter to measure a current, given its max rating and internal impedance\n\n27) A question on finding the equivalent resistance of something similar to a Wheatstone bridge\n\n28) A similar network question about voltage drops and so on\n\n29) Why is the reactance less for a running motor\n\n30) What happens to a running motor when a fault occurs on one of the lines\n\n31) What kind of relay is an earth fault relay\n\n32) Which results in a symmetrical fault: phase-to-phase fault, single phase fault, two phase fault\n\nSo overall most of the questions are basic stuff. All you need to do is prepare the core subjects well.\n\nGood luck all of you." ]
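Two of the numerical questions above can be worked directly from the standard formulas; the component values below are made up for illustration:

```python
# Q1: energy stored in a capacitor is E = (1/2) * C * V^2.
def capacitor_energy(c_farads, v_volts):
    return 0.5 * c_farads * v_volts ** 2

print(capacitor_energy(10e-6, 100.0))  # 0.05 J for an assumed 10 uF at 100 V

# Q6: 1 A is 1 coulomb per second, so divide by the elementary charge.
E_CHARGE = 1.602176634e-19  # coulombs per electron
electrons_per_second = 1.0 / E_CHARGE
print(f"{electrons_per_second:.3e}")   # about 6.24e18 electrons per second
```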
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85215855,"math_prob":0.8382103,"size":4249,"snap":"2022-40-2023-06","text_gpt3_token_len":1032,"char_repetition_ratio":0.0852768,"word_repetition_ratio":0.002793296,"special_character_ratio":0.232996,"punctuation_ratio":0.09363296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9605325,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T06:12:52Z\",\"WARC-Record-ID\":\"<urn:uuid:0a96d656-3165-4292-a080-917f64d0cf37>\",\"Content-Length\":\"230132\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60dc0759-b9cd-4933-b327-89c741132b55>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c65d95c-33d6-4e2c-a5de-e34671b7686b>\",\"WARC-IP-Address\":\"142.251.16.121\",\"WARC-Target-URI\":\"https://www.knowledgeadda.com/2014/06/bpcl-management-trainee-electrical.html\",\"WARC-Payload-Digest\":\"sha1:HWVQC6JHWR2KHGAV5SHYWFOPIIWKHA3P\",\"WARC-Block-Digest\":\"sha1:VXP5AYCLM4JFMWI4M2SZSTOM7PVWWYOW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337287.87_warc_CC-MAIN-20221002052710-20221002082710-00568.warc.gz\"}"}
https://schoolbag.info/chemistry/central/136.html
[ " THE CONCEPT OF EQUILIBRIUM - CHEMICAL EQUILIBRIUM - CHEMISTRY THE CENTRAL SCIENCE\n\n## 15 CHEMICAL EQUILIBRIUM", null, "TRAFFIC ENTERING AND LEAVING a city.\n\n15.1 THE CONCEPT OF EQUILIBRIUM\n\nWe begin by examining reversible reactions and the concept of equilibrium.\n\n15.2 THE EQUILIBRIUM CONSTANT\n\nWe define the equilibrium constant based on rates of forward and reverse reactions, and learn how to write equilibrium-constant expressions for homogeneous reactions.\n\n15.3 UNDERSTANDING AND WORKING WITH EQUILIBRIUM CONSTANTS\n\nWe learn to interpret the magnitude of an equilibrium constant and how its value depends on the way the corresponding chemical equation is expressed.\n\n15.4 HETEROGENEOUS EQUILIBRIA\n\nWe then learn how to write equilibrium-constant expressions for heterogeneous reactions.\n\n15.5 CALCULATING EQUILIBRIUM CONSTANTS\n\nWe see that the value of an equilibrium constant can be calculated from equilibrium concentrations of reactants and products.\n\n15.6 APPLICATIONS OF EQUILIBRIUM CONSTANTS\n\nWe also see that equilibrium constants can be used to predict equilibrium concentrations of reactants and products and to determine the direction in which a reaction mixture must proceed to achieve equilibrium.\n\n15.7 LE CHÂTELIER'S PRINCIPLE\n\nWe discuss Le Châtelier's principle, which predicts how a system at equilibrium responds to changes in concentration, volume, pressure, and temperature.\n\nTO BE IN EQUILIBRIUM IS to be in a state of balance. A tug of war in which the two sides pull with equal force so that the rope does not move is an example of a static equilibrium, one in which an object is at rest. Equilibria can also be dynamic, as illustrated in the chapter-opening photograph, which shows cars traveling in both directions over a bridge that serves as the entry to a city. 
If the rate at which cars leave the city equals the rate at which they enter, the two opposing processes are in balance, and the net number of cars in the city is constant.\n\nWe have already encountered several instances of dynamic equilibrium. For example, the vapor above a liquid in a closed container is in equilibrium with the liquid phase", null, "(Section 11.5), which means that the rate at which molecules escape from the liquid into the gas phase equals the rate at which molecules in the gas phase become part of the liquid. Similarly, in a saturated sodium chloride solution in contact with undissolved sodium chloride, the solid is in equilibrium with the ions dispersed in water.", null, "(Section 13.2) The rate at which ions leave the solid surface equals the rate at which other ions leave the liquid and become part of the solid.\n\nIn this chapter we consider dynamic equilibria in chemical reactions. Chemical equilibrium occurs when opposing reactions proceed at equal rates: The rate at which the products form from the reactants equals the rate at which the reactants form from the products. As a result, concentrations cease to change, making the reaction appear to be stopped. Chemical equilibria are involved in many natural phenomena and play important roles in many industrial processes. In this and the next two chapters, we will explore chemical equilibrium in some detail. Later, in Chapter 19, we will learn how to relate chemical equilibria to thermodynamics. Here we learn how to express the equilibrium state of a reaction in quantitative terms and study the factors that determine the relative concentrations of reactants and products in equilibrium mixtures.\n\n### 15.1 THE CONCEPT OF EQUILIBRIUM\n\nLet's examine a simple chemical reaction to see how it reaches an equilibrium state—a mixture of reactants and products whose concentrations no longer change with time. 
We begin with N2O4, a colorless substance that dissociates to form brown NO2.", null, "FIGURE 15.1 shows a sample of frozen N2O4 inside a sealed tube. The solid N2O4 vaporizes as it is warmed above its boiling point (21.2 °C), and the gas turns darker as the colorless N2O4 gas dissociates into brown NO2 gas. Eventually, even though there is still N2O4 in the tube, the color stops getting darker because the system reaches equilibrium. We are left with an equilibrium mixture of N2O4 and NO2 in which the concentrations of the gases no longer change as time passes. Because the reaction is in a closed system, where no gases can escape, equilibrium will eventually be reached.", null, "GO FIGURE\n\nHow can you tell if you are at equilibrium?", null, "", null, "FIGURE 15.1 The equilibrium between NO2 and N2O4.\n\nThe equilibrium mixture results because the reaction is reversible: N2O4 can form NO2, and NO2 can form N2O4. This situation is represented by writing the equation for the reaction with two half arrows pointing in opposite directions:", null, "(Section 4.1)", null, "We can analyze this equilibrium using our knowledge of kinetics. Let's call the decomposition of N2O4 the forward reaction and the formation of N2O4 the reverse reaction. In this case, both the forward reaction and the reverse reaction are elementary reactions. As we learned in Section 14.6, the rate laws for elementary reactions can be written from their chemical equations:", null, "", null, "At equilibrium, the rate at which NO2 forms in the forward reaction equals the rate at which N2O4 forms in the reverse reaction:", null, "Rearranging this equation gives", null, "From Equation 15.5 we see that the quotient of two rate constants is another constant. We also see that, at equilibrium, the ratio of the concentration terms equals this same constant. (We consider this constant, called the equilibrium constant, in Section 15.2.) 
It makes no difference whether we start with N2O4 or with NO2, or even with some mixture of the two. At equilibrium, at a given temperature, the ratio equals a specific value. Thus, there is an important constraint on the proportions of N2O4 and NO2 at equilibrium.\n\nOnce equilibrium is established, the concentrations of N2O4 and NO2 no longer change, as shown in", null, "FIGURE 15.2 (a). However, the fact that the composition of the equilibrium mixture remains constant with time does not mean that N2O4 and NO2 stop reacting. On the contrary, the equilibrium is dynamic—which means some N2O4 is always converting to NO2 and some NO2 is always converting to N2O4. At equilibrium, however, the two processes occur at the same rate, as shown in Figure 15.2(b).\n\nWe learn several important lessons about equilibrium from this example:\n\n• At equilibrium, the concentrations of reactants and products no longer change with time.\n\n• For equilibrium to occur, neither reactants nor products can escape from the system.\n\n• At equilibrium, a particular ratio of concentration terms equals a constant.", null, "GO FIGURE\n\nAt equilibrium, are the concentrations of NO2 and N2O4 equal?", null, "", null, "FIGURE 15.2 Achieving chemical equilibrium in the", null, "reaction. Equilibrium occurs when the rate of the forward reaction equals the rate of the reverse reaction.", null, "GIVE IT SOME THOUGHT\n\na. Which quantities are equal in a dynamic equilibrium?\n\nb. If the rate constant for the forward reaction in Equation 15.1 is larger than the rate constant for the reverse reaction, will the constant in Equation 15.5 be greater than 1 or smaller than 1?\n\n" ]
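The "rates become equal" argument can be checked numerically. The following sketch (the rate constants kf and kr are made-up illustrative values, not measured ones for N2O4) integrates the forward and reverse rates of N2O4(g) ⇌ 2 NO2(g) with a simple Euler step until the system relaxes, then shows that the concentration ratio [NO2]^2/[N2O4] lands on kf/kr, exactly as in Equation 15.5:

```python
# Toy kinetics for N2O4(g) <=> 2 NO2(g); kf, kr are illustrative values only.
kf, kr = 0.05, 0.01          # forward / reverse rate constants (arbitrary units)
n2o4, no2 = 1.0, 0.0         # start with pure N2O4 (mol/L)
dt = 0.01                    # Euler time step

for _ in range(200_000):     # integrate long enough to reach equilibrium
    forward = kf * n2o4      # Rate_f = kf[N2O4]
    reverse = kr * no2**2    # Rate_r = kr[NO2]^2
    n2o4 += (reverse - forward) * dt
    no2  += 2 * (forward - reverse) * dt   # 2 NO2 made per N2O4 consumed

ratio = no2**2 / n2o4
print(round(ratio, 3), round(kf / kr, 3))  # -> 5.0 5.0
```

Starting instead from pure NO2 (with the same total nitrogen) drives the ratio to the same constant, illustrating that the equilibrium value does not depend on the starting mixture.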
Mathematics, 08.11.2019 13:31, afifakiran5226

# 1. what is (f⋅g)(x)? f(x)=x^4−9 g(x)=x^3+9 enter your answer in the box. 2. what is (f−g)(x)? f(x)=x^3−2x^2+12x−6 g(x)=4x^2−6x+4 enter your answer in the box.
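Both parts are plain coefficient arithmetic, so they can be checked with a short script. The helper names `poly_mul` and `poly_sub` below are made up for illustration; polynomials are given as coefficient lists with the constant term first:

```python
def poly_mul(p, q):
    # Product of two polynomials as coefficient lists (constant term first).
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    # Difference p - q, padding the shorter list with zeros.
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

f = [-9, 0, 0, 0, 1]          # x^4 - 9
g = [9, 0, 0, 1]              # x^3 + 9
print(poly_mul(f, g))         # -> [-81, 0, 0, -9, 9, 0, 0, 1]
                              #    i.e. (f.g)(x) = x^7 + 9x^4 - 9x^3 - 81

f2 = [-6, 12, -2, 1]          # x^3 - 2x^2 + 12x - 6
g2 = [4, -6, 4]               # 4x^2 - 6x + 4
print(poly_sub(f2, g2))       # -> [-10, 18, -6, 1]
                              #    i.e. (f-g)(x) = x^3 - 6x^2 + 18x - 10
```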
# Solutions for Chapter 11: ORTHOGONAL FUNCTIONS AND FOURIER SERIES

## Full solutions for Differential Equations with Boundary-Value Problems | 7th Edition

ISBN: 9780495108368

Since 25 problems in chapter 11: ORTHOGONAL FUNCTIONS AND FOURIER SERIES have been answered, more than 14338 students have viewed full step-by-step solutions from this chapter. Differential Equations with Boundary-Value Problems was written by and is associated to the ISBN: 9780495108368. This expansive textbook survival guide covers the following chapters and their solutions. This textbook survival guide was created for the textbook: Differential Equations with Boundary-Value Problems, edition: 7. Chapter 11: ORTHOGONAL FUNCTIONS AND FOURIER SERIES includes 25 full step-by-step solutions.

Key Calculus Terms and definitions covered in this textbook

• Absolute maximum

A value ƒ(c) is an absolute maximum value of ƒ if ƒ(c) ≥ ƒ(x) for all x in the domain of ƒ.

• Addition rule of probability

P(A or B) = P(A) + P(B) - P(A and B). If A and B are mutually exclusive events, then P(A or B) = P(A) + P(B).

• Compounded k times per year

Interest compounded using the formula A = P(1 + r/k)^(kt), where k = 1 is compounded annually, k = 4 is compounded quarterly, k = 12 is compounded monthly, etc.

• Conversion factor

A ratio equal to 1, used for unit conversion.

• Direction angle of a vector

The angle that the vector makes with the positive x-axis.

• Exponential growth function

Growth modeled by ƒ(x) = a·b^x, a > 0, b > 1.

• Factoring (a polynomial)

Writing a polynomial as a product of two or more polynomial factors.

• Half-plane

The graph of the linear inequality y ≤ ax + b, y > ax + b, y ≥ ax + b, or y < ax + b.

• Inverse relation (of the relation R)

A relation that consists of all ordered pairs (b, a) for which (a, b) belongs to R.

• Nappe

See Right circular cone.

• Paraboloid of revolution

A surface generated by rotating a parabola about its line of symmetry.

• Parameter

See Parametric equations.

• Polar coordinates

The numbers (r, θ) that determine a point's location in a polar coordinate system. The number r is the directed distance and θ is the directed angle.

• Polynomial in x

An expression that can be written in the form a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0, where n is a nonnegative integer, the coefficients are real numbers, and a_n ≠ 0. The degree of the polynomial is n, the leading coefficient is a_n, the leading term is a_n x^n, and the constant term is a_0. (The number 0 is the zero polynomial.)

• Random behavior

Behavior that is determined only by the laws of probability.

• Regression model

An equation found by regression and which can be used to predict unknown values.

• Sample space

Set of all possible outcomes of an experiment.

• Scientific notation

A positive number written as c × 10^m, where 1 ≤ c < 10 and m is an integer.

• Second-degree equation in two variables

Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, where A, B, and C are not all zero.

• Upper bound test for real zeros

A test for finding an upper bound for the real zeros of a polynomial.
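The compound-interest formula A = P(1 + r/k)^(kt) from the glossary can be sanity-checked numerically. This is a minimal sketch with made-up example values (P, r, k, t are not taken from the glossary):

```python
# A = P(1 + r/k)^(kt): $1000 at 6% APR, compounded monthly (k=12), for 5 years.
P, r, k, t = 1000.0, 0.06, 12, 5
A = P * (1 + r / k) ** (k * t)
print(round(A, 2))   # -> 1348.85
```

Raising k (daily, hourly, ...) pushes A toward the continuous-compounding limit P·e^(rt), which is a useful cross-check.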
# Python Tuple

In Python, a Tuple is a collection of items. In a Python Tuple, items are ordered, and therefore can be accessed using an index.

Also, an important property to note about a Python Tuple is that it is immutable. You cannot change the items of a tuple in Python.

But, if it is really necessary for you to change a tuple, you may convert the tuple to a list, modify the list, and convert it back to a tuple. This is only a workaround to modify a tuple.

### Examples for Python Tuple

In Python, a tuple is defined using parentheses. The elements are defined inside the parentheses and are comma separated.

A Python Tuple can contain a definite number of items belonging to different datatypes.

Following are some examples of Tuples in Python.

```
tuple1 = (14, 52, 17, 24)
tuple2 = ('Hello', 'Hi', 'Good Morning')
tuple3 = (1, 'Tony', 25)
```

### Access Tuple Items using Index

As Tuple items are ordered, you can access them using an index, as with items in a Python List.

```
tuple1 = (14, 52, 17, 24)
print(tuple1[1])
print(tuple1[3])
```

Output

```
52
24
```

### Python Tuple Length

Like any other collection in Python, you may use the len() builtin function to get the length of a tuple.

```
tuple1 = (14, 52, 17, 24)
print(len(tuple1))
```

Output

```
4
```

### Iterate over items of Python Tuple

You can iterate over a Python Tuple's items using a for loop.

```
tuple1 = (14, 52, 17, 24)
for item in tuple1:
    print(item)
```

Output

```
14
52
17
24
```

Or you can also use a while loop with the tuple length and an index to iterate over the tuple's items.

```
tuple1 = (14, 52, 17, 24)

index = 0
while index < len(tuple1):
    print(tuple1[index])
    index = index + 1
```

Output

```
14
52
17
24
```

### Summary

In this Python Tutorial, we learned what a Tuple is in Python and how to define and access it in different ways.
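The tuple-to-list workaround mentioned at the top can be sketched like this (a new tuple is created; the original tuple object itself is never mutated):

```python
tuple1 = (14, 52, 17, 24)

items = list(tuple1)   # tuple -> mutable list
items[2] = 99          # change an element
tuple1 = tuple(items)  # list -> new tuple, rebound to the same name

print(tuple1)          # -> (14, 52, 99, 24)
```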
# How do I find the factorial of a positive number?

The challenge:

Write a program or a function that inputs a positive number and returns its factorial.

Note: This is a code-trolling question. Please do not take the question and/or answers seriously. More information here. Every code-trolling question is also a popularity-contest question, so the highest voted answer wins.

– Petr Dec 29, 2013 at 7:46
• -1, sorry, because we're getting a huge flood of these code trolling questions and this does not really add anything new to them Dec 29, 2013 at 11:25
• – Paul Dec 30, 2013 at 10:08
• Code-trolling is in the process of being removed, as per the official stance. This question has a fair amount of votes with many answers, many of which are extremely highly voted. It received just over 50% "delete" votes on the poll, but it is unique in that it received so many answers and votes, so I am locking it for historical significance. May 12, 2014 at 0:13

This is a very simple numerical computing problem that we can solve with Stirling's approximation:

    n! ≈ √(2πn) · (n/e)^n

As you can see, that formula features a square root, which we will also need a way to approximate. We will choose the so-called "Babylonian method" for that because it is arguably the simplest one:

    x_(k+1) = (x_k + S/x_k) / 2, starting from x_0 = S, which converges to √S

Note that computing the square root this way is a good example of recursion.

Putting it all together in a Python program gives us the following solution to your problem:

```
def sqrt(x, n): # not the same n as below
    return .5 * (sqrt(x, n - 1) + x / sqrt(x, n - 1)) if n > 0 else x

n = float(raw_input())
print (n / 2.718) ** n * sqrt(2 * 3.141 * n, 10)
```

With a simple modification the above program can output a neat table of factorials:

```
1! = 0.92215
2! = 1.91922
3! = 5.83747
4! = 23.51371
5! = 118.06923
6! = 710.45304
7! = 4983.54173
8! = 39931.74015
9! = 359838.58817
```

This method should be sufficiently accurate for most applications.

• +1 The simplicity and accuracy of this method makes it a clear winner Dec 31, 2013 at 12:09

## C#

Sorry, but I hate recursive functions.

```
public string Factorial(uint n) {
    return n + "!";
}
```

• Technically, you've satisfied the brief! ;) +1 for brief abuse Jan 6, 2014 at 5:30

Java

```
public int factorial ( int n ) {
    switch(n){
        case 0: return 1;
        case 1: return 1;
        case 2: return 2;
        case 3: return 6;
        case 4: return 24;
        case 5: return 120;
        case 6: return 720;
        case 7: return 5040;
        case 8: return 40320;
        case 9: return 362880;
        case 10: return 3628800;
        case 11: return 39916800;
        case 12: return 479001600;
        default : throw new IllegalArgumentException();
    }
}
```

• I tried it - very efficient. Will ship with next release. :) Dec 29, 2013 at 22:33
• Beside the "magical numbers syndrome", this could actually be a good implementation as long as n<13, much less stacks. Write it "case 4: return 4*3*2;" and you'd have a decent class, much faster than the old recursive one. Jan 30, 2014 at 17:16
• @Fabinout, the implementation is correct even for n>=13. 13!>Integer.MAX_VALUE.
Jan 30, 2014 at 21:11

# Python

Of course the best way how to solve any problem is to use regular expressions:

```
import re

def multiple_replace(dict, text):
    # Create a regular expression from the dictionary keys
    regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys())))
    # Repeat while any replacements are made.
    count = -1
    while count != 0:
        # For each match, look-up corresponding value in dictionary.
        (text, count) = regex.subn(lambda mo: dict[mo.string[mo.start():mo.end()]], text)
    return text

fdict = {
    'A': '@',
    'B': 'AA',
    'C': 'BBB',
    'D': 'CCCC',
    'E': 'DDDDD',
    'F': 'EEEEEE',
    'G': 'FFFFFFF',
    'H': 'GGGGGGGG',
    'I': 'HHHHHHHHH',
    'J': 'IIIIIIIIII',
    'K': 'JJJJJJJJJJJ',
    'L': 'KKKKKKKKKKKK',
    'M': 'LLLLLLLLLLLLL',
    'N': 'MMMMMMMMMMMMMM',
    'O': 'NNNNNNNNNNNNNNN',
    'P': 'OOOOOOOOOOOOOOOO',
    'Q': 'PPPPPPPPPPPPPPPPP',
    'R': 'QQQQQQQQQQQQQQQQQQ',
    'S': 'RRRRRRRRRRRRRRRRRRR',
    'T': 'SSSSSSSSSSSSSSSSSSSS',
    'U': 'TTTTTTTTTTTTTTTTTTTTT',
    'V': 'UUUUUUUUUUUUUUUUUUUUUU',
    'W': 'VVVVVVVVVVVVVVVVVVVVVVV',
    'X': 'WWWWWWWWWWWWWWWWWWWWWWWW',
    'Y': 'XXXXXXXXXXXXXXXXXXXXXXXXX',
    'Z': 'YYYYYYYYYYYYYYYYYYYYYYYYYY'}

def fact(n):
    return len(multiple_replace(fdict, chr(64 + n)))

if __name__ == "__main__":
    print fact(7)
```

• Of course indeed :) Jan 3, 2014 at 21:27

# Haskell

Short code is efficient code, so try this.

```
fac = length . permutations . flip take [1..]
```

Why it's trolling:

I'd laugh at any coder who wrote this... The inefficiency is beautiful. Also probably incomprehensible to any Haskell programmer who actually can't write a factorial function.

Edit: I posted this a while ago now, but I thought I'd clarify for future people and people who can't read Haskell.

The code here takes the list of the numbers 1 to n, creates the list of all permutations of that list, and returns the length of that list. On my machine it takes about 20 minutes for 13!. And then it ought to take four hours for 14!
and then two and a half days for 15!. Except that at some point in there you run out of memory.

Edit 2: Actually you probably won't run out of memory due to this being Haskell (see the comment below). You might be able to force it to evaluate the list and hold it in memory somehow, but I don't know enough about optimizing (and unoptimizing) Haskell to know exactly how to do that.

• Hideous and yet so elegant, all at the same time. – PLL Jan 10, 2014 at 11:21
• Are you sure about the memory issue? At any one point, you need to hold in memory: the list [1..n], one particular permutation of [1..n] consed to a thunk for the rest of the permutations (polynomial in n), and an accumulator for the length function. Mar 11, 2014 at 6:47
• Fair point, probably not actually. Didn't really think about it too much. I'll add a comment at the bottom. – jgon Mar 17, 2014 at 18:23

## C#

Since this is a math problem, it makes sense to use an application specifically designed to solve math problems to do this calculation...

### Step 1:

Install MATLAB. A trial will work, I think, but this super-complicated problem is likely important enough to merit purchasing the full version of the application.

### Step 2:

Include the MATLAB COM component in your application.

### Step 3:

```
public string Factorial(uint n) {
    MLApp.MLApp matlab = new MLApp.MLApp();
    return matlab.Execute(String.Format("factorial({0})", n));
}
```

• Matlab for students starts at $100. Professional versions or site licenses can go way into the thousands. Dec 30, 2013 at 5:43
• Moshe Katz - justified because factorials. Jan 22, 2014 at 21:57

## C#

Factorials are a higher level math operation that can be difficult to digest all in one go. The best solution in programming problems like this, is to break down one large task into smaller tasks.

Now, n! is defined as 1*2*...*n, so, in essence repeated multiplication, and multiplication is nothing but repeated addition.
So, with that in mind, the following solves this problem:

```
long Factorial(int n) {
    if(n==0) {
        return 1;
    }
    Stack<long> s = new Stack<long>();
    for(var i=1;i<=n;i++) {
        s.Push(i);
    }
    var items = new List<long>();
    var n2 = s.Pop();
    while(s.Count >0) {
        var n3 = s.Pop();
        items.AddRange(FactorialPart(n2,n3));
        n2 = items.Sum();
    }
    return items.Sum()/(n-1);
}

IEnumerable<long> FactorialPart(long n1, long n2) {
    for(var i=0;i<n2;i++){
        yield return n1;
    }
}
```

• You have a bottleneck sending all this through one CPU or core, which I think I may have solved in my answer :-) – Paul Dec 30, 2013 at 10:19

# C

```
#include <math.h>

int factorial(int n)
{
    const double g = 7;
    static const double p[] = {
        0.99999999999980993, 676.5203681218851, -1259.1392167224028,
        771.32342877765313, -176.61502916214059, 12.507343278686905,
        -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7
    };
    double z = n - 1 + 1;
    double x = p[0];
    int i;
    for ( i = 1; i < sizeof(p)/sizeof(p[0]); ++i )
        x += p[i] / (z + i);
    return sqrt(2 * M_PI) * pow(z + g + 0.5, z + 0.5) * exp(-z -g -0.5) * x + 0.5;
}
```

Trolls:

• A 100% correct way of computing factorial that completely misses the point of either doing it iteratively or recursively.
• You have no idea why it works and could not generalize it to do anything else.
• More costly than just computing it with integer math.
• The most obvious "suboptimal" code (z = n - 1 + 1) is actually self-documenting if you know what's going on.
• For extra trolling I should compute p[] using a recursive calculation of the series coefficients!

• Is there any point in - 1 + 1 here? My compiler optimizes it (it's not floating point number where optimizing code like this could be dangerous), so it appears to be unneeded. Jan 14, 2014 at 14:36
• @xfix: double z = n - 1 is part of the approximation of the gamma function. The + 1 is from the relationship that gamma(n + 1) = n! for integer n.
Jan 14, 2014 at 14:39

# Python

We all know from college that the most efficient way to calculate a multiplication is through the use of logarithms. After all, why else would people use logarithm tables for hundreds of years?

So from the identity a*b = e^(log(a)+log(b)) we form the following Python code:

```
from math import log, exp

def fac_you(x):
    return round(exp(sum(map(log, range(1, x+1)))))

for i in range(1, 99):
    print i, ":", fac_you(i)
```

It creates a list of numbers from 1 to x (the +1 is needed because Python sucks), calculates the logarithm of each, sums the numbers, raises e to the power of the sum and finally rounds the value to the nearest integer (because Python sucks). Python has a built-in function for calculating factorials, but it only works for integers, so it can't produce big numbers (because Python sucks). This is why the function above is needed.

Btw, a general tip for students is that if something doesn't work as expected, it's probably because the language sucks.

• Wish I could give some extra votes there for the description, but Python sucks Jan 24, 2014 at 14:45
• I laughed at "fac you" Mar 17, 2014 at 19:56

# Javascript

Unfortunately, Javascript lacks a built-in way to compute the factorial. But, you can use its meaning in combinatorics to determine the value nevertheless: The factorial of a number n is the number of permutations of a list of that size.
So, we can generate every n-digit list, check if it is a permutation, and if so, increment a counter:

```
window.factorial = function($nb_number) {
    $nb_trials = 1
    for ($i = 0; $i < $nb_number; $i++)
        $nb_trials *= $nb_number
    $nb_successes = 0
    __trying__:
    for ($nb_trial = 0; $nb_trial < $nb_trials; $nb_trial++) {
        $a_trial_split = new Array
        $nb_tmp = $nb_trial
        for ($nb_digit = 0; $nb_digit < $nb_number; $nb_digit++) {
            $a_trial_split[$nb_digit] = $nb_tmp - $nb_number * Math.floor($nb_tmp / $nb_number)
            $nb_tmp = Math.floor($nb_tmp / $nb_number)
        }
        for ($i = 0; $i < $nb_number; $i++)
            for ($j = 0; $j < $nb_number; $j++)
                if ($i != $j)
                    if ($a_trial_split[$i] == $a_trial_split[$j])
                        continue __trying__
        $nb_successes += 1
    }
    return $nb_successes
}

document.open()
document.write("<input type = text onblur = alert(factorial(parseInt(this.value))))>")
document.close()
```

Trolls:

• Hungarian notation, snake_case and unnecessary sigils. How evil is that?
• Invented my own convention for jump labels, incompatible with the current use of this convention.
• Every possible variable is accidentally global.
• The solution is not O(n), not O(n!), but O(n^n). This alone would have sufficed to qualify here.
• Incrementing a number and then converting as base-n is a bad way to generate a list of sequences. Even if we did want duplicates. Mysteriously breaking for n > 13 is not the only reason.
• Of course we could have used number.toString(base), but that doesn't work for bases above 36. Yes, I know 36! is a lot, but still...
• Did I mention Javascript had the modulus operator? Or Math.pow? No? Oh well.
• Refusing to use ++ outside of for-loops makes it even more mysterious. Also, == is bad.
• Deeply nested braceless looping constructs. Also, nested conditionals instead of AND. Also, the outer condition could have been avoided by ending the inner loop at $i.
• The functions new Array, document.write (with friends) and alert (instead of a prompt or an input label) form a complete trifecta of function choice sins. Why is the input added dynamically after all?
• Inline event handlers. Oh, and deep piping is hell to debug.
• Unquoted attributes are fun, and the spaces around = make them even harder to read.
• Did I already mention I hate semicolons?

# Ruby and WolframAlpha

This solution uses the WolframAlpha REST API to calculate the factorial, with RestClient to fetch the solution and Nokogiri to parse it. It doesn't reinvent any wheels and uses well tested and popular technologies to get the result in the most modern way possible.

```
require 'rest-client'
require 'nokogiri'

n = gets.chomp.to_i
response = Nokogiri::XML(RestClient.get("http://api.wolframalpha.com/v2/query?input=#{n}!&format=moutput&appid=YOUR_APP_KEY"))
puts response.xpath("//*/moutput/text()").text
```

# Javascript

Javascript is a functional programming language, this means you have to use functions for everything because it's faster.

```
function fac(n){
    var r = 1,
        a = Array.apply(null, Array(n)).map(Number.call, Number).map(function(n){r = r * (n + 1);});
    return r;
}
```

• Can you explain? Dec 31, 2013 at 15:55
• 1 is not a function. Your code is thus slow. Jan 2, 2014 at 13:59
• @ArlaudPierre r = -~(function(){}) will surely solve that. Jan 5, 2014 at 11:49
• I am on a work machine so I don't really want to install this language. Where can I find a version that will run in my browser? Jan 13, 2014 at 20:43
• I'm a bit scared of using Google because my boss has an account with them, and I don't want him to know I'm playing golf at work. I was looking for an extension for Firefox that could run Javascript, but I can't seem to find one. Some of my friends run Javascript on jsfiddle.net but that's using somebody else's electricity which is a bit like stealing. My mum said I shouldn't hang around with people like that, but they are my friends so what can I do?
Anyway she sometimes takes more creamer than she needs. Thanks for the tips, I use Ctrl-Shift-J or K in Firefox. Disclaimer: #comment-trolling Jan 13, 2014 at 21:54 # Using Bogo-Sort in Java public class Factorial { public static void main(String[] args) { //take the factorial of the integers from 0 to 7: for(int i = 0; i < 8; i++) { System.out.println(i + \": \" + accurate_factorial(i)); } } //takes the average over many tries public static long accurate_factorial(int n) { double sum = 0; for(int i = 0; i < 10000; i++) { sum += factorial(n); } return Math.round(sum / 10000); } public static long factorial(int n) { //n! = number of ways to sort n //bogo-sort has O(n!) time, a good approximation for n! //for best results, average over several passes //create the list {1, 2, ..., n} int[] list = new int[n]; for(int i = 0; i < n; i++) list[i] = i; //mess up list once before we begin randomize(list); long guesses = 1; while(!isSorted(list)) { randomize(list); guesses++; } return guesses; } public static void randomize(int[] list) { for(int i = 0; i < list.length; i++) { int j = (int) (Math.random() * list.length); //super-efficient way of swapping 2 elements without temp variables if(i != j) { list[i] ^= list[j]; list[j] ^= list[i]; list[i] ^= list[j]; } } } public static boolean isSorted(int[] list) { for(int i = 1; i < list.length; i++) { if(list[i - 1] > list[i]) return false; } return true; } } This actually works, just very slowly, and it isn't accurate for higher numbers. PERL Factorial can be a hard problem. A map/reduce like technique -- just like Google uses -- can split up the math by forking off a bunch of processes and collecting the results. This will make good use of all those cores or cpus in your system on a cold winter's night. Save as f.perl and chmod 755 to make sure you can run it. You do have the Pathologically Eclectic Rubbish Lister installed, don't you? 
```perl
#!/usr/bin/perl -w
use strict;
use bigint;
die "usage: f.perl N (outputs N!)" unless ($#ARGV == 0);
print STDOUT &main::rangeProduct(1, $ARGV[0])."\n";

sub main::rangeProduct {
    my($l, $h) = @_;
    return $l if ($l==$h);
    return $l*$h if ($l==($h-1));
    # arghhh - multiplying more than 2 numbers at a time is too much work
    # find the midpoint and split the work up :-)
    my $m = int(($h+$l)/2);
    my $pid = open(my $KID, "-|");
    if ($pid){
        # parent
        my $X = &main::rangeProduct($l, $m);
        my $Y = <$KID>;
        chomp($Y);
        close($KID);
        die "kid failed" unless defined $Y;
        return $X*$Y;
    } else {
        # kid
        print STDOUT &main::rangeProduct($m+1, $h)."\n";
        exit(0);
    }
}
```

Trolls:

• forks O(log2(N)) processes
• doesn't check how many CPUs or cores you have
• Hides lots of bigint/text conversions that occur in every process
• A for loop is often faster than this code

• TIL that in perl ARGV is actually the first argument and not the script! – Jan 5, 2014 at 11:17
• @plg I believe $0 might contain the script filename, but that is not the same as $ARGV[0] – Paul – Jan 5, 2014 at 11:30
• Yep, that's what I read. I just found it surprising that in perl it's not $ARGV[0] because most languages I know a bit have it there – Jan 5, 2014 at 11:35

## Python

Just an O(n!*n^2) algorithm to find the factorial. Base case handled. No overflows.

```python
def divide(n,i):
    res=0
    while n>=i:
        res+=1
        n=n-i
    return res

def isdivisible(n,numbers):
    for i in numbers:
        if n%i!=0:
            return 0
        n=divide(n,i)
    return 1

def factorial(n):
    res = 1
    if n==0: return 1 #Handling the base case
    while not isdivisible(res,range(1,n+1)):
        res+=1
    return res
```

Well, there is an easy solution in Golfscript. You could use a Golfscript interpreter and run this code:

```
.!+,1\{)}%{*}/
```

Easy huh :) Good luck!

• I don't know GolfScript, but this one disappoints me... Based on the other GolfScript examples on this site, I would have expected the answer to be ! – Jan 11, 2014 at 7:19
• That is the negation operator. 0 becomes 1 and everything else becomes 0. – Jan 11, 2014 at 12:20

### Mathematica

```mathematica
factorial[n_] := Length[Permutations[Table[k, {k, 1, n}]]]
```

It doesn't seem to work for numbers larger than 11, and factorial[12] froze up my computer.

# Ruby

```ruby
f=->(n) { return 1 if n.zero?; t=0; t+=1 until t/n == f[n-1]; t }
```

The slowest one-liner I can imagine. It takes 2 minutes on an i7 processor to calculate 6!.

The correct approach for these difficult math problems is a DSL. So I'll model this in terms of a simple language

```haskell
data DSL b a = Var x (b -> a)
             | Mult DSL DSL (b -> a)
             | Plus DSL DSL (b -> a)
             | Const Integer (b -> a)
```

To write our DSL nicely, it's helpful to view it as a free monad generated by the algebraic functor

```haskell
F X = X + F (DSL b (F X)) -- Informally define + to be the disjoint sum of two sets
```

We could write this in Haskell as

```haskell
Free b a = Pure a | Free (DSL b (Free b a))
```

I will leave it to the reader to derive the trivial implementation of

```haskell
join :: Free b (Free b a) -> Free b a
return :: a -> Free b a
liftF :: DSL b a -> Free b a
```

Now we can describe an operation to model a factorial in this DSL

```haskell
factorial :: Integer -> Free Integer Integer
factorial 0 = liftF $ Const 1 id
factorial n = do
    fact' <- factorial (n - 1)
    liftF $ Mult fact' n id
```

Now that we've modeled this, we just need to provide an actual interpretation function for our free monad.

```haskell
denote :: Free Integer Integer -> Integer
denote (Pure a) = a
denote (Free (Const 0 rest)) = denote $ rest 0
...
```

And I'll leave the rest of the denotation to the reader.

To improve readability, it's sometimes helpful to present a concrete AST of the form

```haskell
data AST = ConstE Integer
         | PlusE AST AST
         | MultE AST AST
```

and then write a trivial reflection

```haskell
reify :: Free b Integer -> AST
```

and then it's straightforward to recursively evaluate the AST.

# Python

Below is a Python version of the solution, which is not limited to the 32-bit (or 64-bit on a very recent system) limit for integer numbers.
To get around this limitation, we shall use a string as input and output for the factorial routine and internally split the string into its digits to be able to perform the multiplication.

So here is the code: the getDigits function splits a string representing a number into its digits, so "1234" becomes [ 4, 3, 2, 1 ] (the reverse order just makes the increase and multiply functions simpler). The increase function takes such a list and increases it by one. As the name suggests, the multiply function multiplies, e.g. multiply([2, 1], [3]) returns [ 6, 3 ] because 12 times 3 is 36. This works in the same way as you would multiply something with pen and paper.

Then finally, the factorial function uses these helper functions to calculate the actual factorial, for example factorial("9") gives "362880" as its output.

```python
import copy

def getDigits(n):
    digits = []
    for c in n:
        digits.append(ord(c) - ord('0'))
    digits.reverse()
    return digits

def increase(d):
    d[0] += 1
    i = 0
    while d[i] >= 10:
        if i == len(d)-1:
            d.append(0)
        d[i] -= 10
        d[i+1] += 1
        i += 1

def multiply(a, b):
    subs = [ ]
    s0 = [ ]
    for bi in b:
        s = copy.copy(s0)
        carry = 0
        for ai in a:
            m = ai * bi + carry
            s.append(m%10)
            carry = m//10
        if carry != 0:
            s.append(carry)
        subs.append(s)
        s0.append(0)

    done = False
    res = [ ]
    termsum = 0
    pos = 0
    while not done:
        found = False
        for s in subs:
            if pos < len(s):
                found = True
                termsum += s[pos]
        if not found:
            done = True
        else:
            res.append(termsum%10)
            termsum = termsum//10
        pos += 1

    while termsum != 0:
        res.append(termsum%10)
        termsum = termsum//10

    return res

def factorial(x):
    if x.strip() == "0" or x.strip() == "1":
        return "1"

    factorial = [ 1 ]
    done = False
    number = [ 1 ]
    stopNumber = getDigits(x)
    while not done:
        if number == stopNumber:
            done = True
        factorial = multiply(factorial, number)
        increase(number)

    factorial.reverse()

    result = ""
    for c in factorial:
        result += chr(c + ord('0'))
    return result

print factorial("9")
```

### Notes

In Python an integer doesn't have a limit, so if you'd like to do this manually you can just do

```python
fac = 1
for i in range(2,n+1):
    fac *= i
```

There's also the very convenient math.factorial(n) function.

This solution is obviously far more complex than it needs to be, but it does work and in fact it illustrates how you can calculate the factorial in case you are limited by 32 or 64 bits. So while nobody will believe this is the solution you've come up with for this simple (at least in Python) problem, you can actually learn something.

• There is no limit on integer numbers in Python... right? You might need to explain this better. – Dec 30, 2013 at 8:49
• @Riking Yes, in python there's no limit for integers. I've added a few notes to make it more clear. – brm – Dec 30, 2013 at 9:25

## Python

The most reasonable solution is clearly to check through all numbers until you find the one which is the factorial of the given number.

```python
print('Enter the number')
n=int(input())
x=1
while True:
    x+=1
    tempx=int(str(x))
    d=True
    for i in range(1, n+1):
        if tempx/i!=round(tempx/i):
            d=False
        else:
            tempx/=i
    if d:
        print(x)
        break
```

# A most elegant recursive solution in C

Every one knows the most elegant solutions to factorials are recursive.

Factorial:

```
0! = 1
1! = 1
n! = n * (n - 1)!
```

But multiplication can also be defined recursively as successive additions.

Multiplication:

```
n * 0 = 0
n * 1 = n
n * m = n + n * (m - 1)
```

And so can addition as successive incrementations.

```
n + 0 = n
n + 1 = (n + 1)
n + m = (n + 1) + (m - 1)
```

In C, we can use ++x and --x to handle the primitives (x + 1) and (x - 1) respectively, so we have everything defined.

```c
#include <stdlib.h>
#include <stdio.h>

// For more elegance, use T for the type
typedef unsigned long T;

// For even more elegance, functions are small enough to fit on one line

// Addition
T A(T n, T m) { return (m > 0)? A(++n, --m) : n; }

// Multiplication
T M(T n, T m) { return (m > 1)? A(n, M(n, --m)): (m? n: 0); }

// Factorial
T F(T n) { T m = n; return (m > 1)? M(n, F(--m)): 1; }

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;

    printf("%lu\n", F(atol(argv[1])));

    return 0;
}
```

Let's try it out:

```
$ ./factorial 0
1
$ ./factorial 1
1
$ ./factorial 2
2
$ ./factorial 3
6
$ ./factorial 4
24
$ ./factorial 5
120
$ ./factorial 6
720
$ ./factorial 7
5040
$ ./factorial 8
40320
```

Perfect, although 8! took a long time for some reason. Oh well, the most elegant solutions aren't always the fastest. Let's continue:

```
$ ./factorial 9
```

Hmm, I'll let you know when it gets back...

# Python

As @Matt_Sieker's answer indicated, factorials can be broken up into addition- why, breaking up tasks is the essence of programming. But, we can break that down into addition by 1!

```python
def complicatedfactorial(n):
    def addby1(num):
        return num + 1
    def addnumbers(a,b):
        copy = b
        cp2 = a
        while b != 0:
            cp2 = addby1(cp2)
            b -= 1
        return cp2
    def multiply(a,b):
        copy = b
        cp2 = a
        result = 0
        while copy != 0:
            result = addnumbers(result,cp2)
            copy -= 1
        return result
    if n == 0:
        return 1
    else:
        return multiply(complicatedfactorial(n-1),n)
```

I think this code guarantees an SO Error, because

1. Recursion- warms it up

2. Each layer generates calls to multiply

3. which generates calls to addnumbers

4.
which generates calls to addby1!

Too much functions, right?

5!

http://lmgtfy.com/?q=5!

## TI-Basic 84

```
:yumtcInputdrtb@gmail And:cReturnbunchojunk@Yahoo A!op:sEnd:theemailaddressIS Crazy ANSWER LOL
```

It really works :)

# Javascript

Obviously the job of a programmer is to do as little work as possible, and to use as many libraries as possible. Therefore, we want to import jQuery and math.js. Now, the task is as simple as this:

```javascript
$.alert=function(message){
    alert(message);
}
$.factorial=function(number){
    return math.eval(number+"!");
}
$.factorial(10);
```

## Python

With just a slight modification of the standard recursive factorial implementation, it becomes intolerably slow for n > 10.

```python
def factorial(n):
    if n in (0, 1):
        return 1
    else:
        result = 0
        for i in range(n):
            result += factorial(n - 1)
        return result
```

# Bash

```bash
#! /bin/bash

function fact {
    if [[ ${1} -le 1 ]]; then
        return 1
    fi;

    fact $((${1} - 1))
    START=$(date +%s)
    for i in $(seq 1 $?); do sleep ${1}; done
    END=$(date +%s)
    RESULT=$(($END - $START))
    return $RESULT
}

fact ${1}
echo $?
```

Let's try to do it by the Monte Carlo Method. We all know that the probability of two random n-permutations being equal is exactly 1/n!. Therefore we can just check how many tests are needed (let's call this number b) until we get c hits. Then, n! ~ b/c.

## Sage, should work in Python, too

```python
def RandomPermutation(n):
    t = range(0,n)
    for i in xrange(n-1,0,-1):
        x = t[i]
        r = randint(0,i)
        t[i] = t[r]
        t[r] = x
    return t

def MonteCarloFactorial(n,c):
    a = 0
    b = 0
    t = RandomPermutation(n)
    while a < c:
        t2 = list(t)
        t = RandomPermutation(n)
        if t == t2:
            a += 1
        b += 1
    return round(b/c)

MonteCarloFactorial(5,1000) # returns an estimate of 5!
```

## bash

Factorials are easily determined with well known command line tools from bash.

```bash
read -p "Enter number: " n
seq 1 $n | xargs echo | tr ' ' '*' | bc
```

As @Aaron Davies mentioned in the comments, this looks much tidier and we all want a nice and tidy program, don't we?

```bash
read -p "Enter number: " n
seq 1 $n | paste -sd\* | bc
```

• I recommend the highly-underrated paste command: seq 1 $n | paste -sd\* | bc – Jan 14, 2014 at 2:33
• @AaronDavies paste does look like a regular English word and with that easy to remember. Do we really want that? ;o) – Jan 14, 2014 at 6:35
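For what it's worth, most of the deliberately awful answers above still compute n! correctly. Here is a plain Python cross-check (not taken from any answer; the function names are mine) of three of the ideas, the direct product, the permutation count, and the successive-division search, against math.factorial:

```python
import math
from itertools import permutations

def factorial_product(n):
    # the straightforward definition: n! = 1 * 2 * ... * n
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_permutations(n):
    # n! = number of distinct orderings of n items (cf. the Mathematica answer)
    return sum(1 for _ in permutations(range(n)))

def factorial_search(n):
    # smallest res surviving the successive-division test (cf. the O(n!*n^2) answer)
    def isdivisible(res, numbers):
        for i in numbers:
            if res % i != 0:
                return False
            res //= i
        return True
    res = 1
    while not isdivisible(res, range(1, n + 1)):
        res += 1
    return res

for n in range(7):
    assert factorial_product(n) == math.factorial(n)
    assert factorial_permutations(n) == math.factorial(n)
    assert factorial_search(n) == math.factorial(n)
```

All three agree for small n; only the running times differ wildly.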
https://balbhartisolutions.com/maharashtra-board-11th-commerce-maths-solutions-chapter-1-ex-1-2-part-2/
Balbharati Maharashtra State Board 11th Commerce Maths Solution Book Pdf Chapter 1 Partition Values Ex 1.2 Questions and Answers.

## Maharashtra State Board 11th Commerce Maths Solutions Chapter 1 Partition Values Ex 1.2

Question 1.
Calculate D6 and P85 for the following data:
79, 82, 36, 38, 51, 72, 68, 70, 64, 63
Solution:
The given data can be arranged in ascending order as follows:
36, 38, 51, 63, 64, 68, 70, 72, 79, 82
Here, n = 10
D6 = value of $$6\left(\frac{n+1}{10}\right)^{\text{th}}$$ observation
= value of $$6\left(\frac{10+1}{10}\right)^{\text{th}}$$ observation
= value of (6 × 1.1)th observation
= value of (6.6)th observation
= value of 6th observation + 0.6(value of 7th observation – value of 6th observation)
= 68 + 0.6(70 – 68)
= 68 + 0.6(2)
= 68 + 1.2
∴ D6 = 69.2
P85 = value of $$85\left(\frac{n+1}{100}\right)^{\text{th}}$$ observation
= value of $$85\left(\frac{10+1}{100}\right)^{\text{th}}$$ observation
= value of (85 × 0.11)th observation
= value of (9.35)th observation
= value of 9th observation + 0.35(value of 10th observation – value of 9th observation)
= 79 + 0.35(82 – 79)
= 79 + 0.35(3)
= 79 + 1.05
∴ P85 = 80.05

Question 2.
The daily wages (in ₹) of 15 labourers are as follows:
230, 400, 350, 200, 250, 380, 210, 225, 375, 180, 375, 450, 300, 350, 250
Calculate D8 and P90.
Solution:
The given data can be arranged in ascending order as follows:
180, 200, 210, 225, 230, 250, 250, 300, 350, 350, 375, 375, 380, 400, 450
Here, n = 15
D8 = value of $$8\left(\frac{n+1}{10}\right)^{\text{th}}$$ observation
= value of $$8\left(\frac{15+1}{10}\right)^{\text{th}}$$ observation
= value of (8 × 1.6)th observation
= value of (12.8)th observation
= value of 12th observation + 0.8(value of 13th observation – value of 12th observation)
= 375 + 0.8(380 – 375)
= 375 + 0.8(5)
= 375 + 4
∴ D8 = 379
P90 = value of $$90\left(\frac{n+1}{100}\right)^{\text{th}}$$ observation
= value of $$90\left(\frac{15+1}{100}\right)^{\text{th}}$$ observation
= value of (90 × 0.16)th observation
= value of (14.4)th observation
= value of 14th observation + 0.4(value of 15th observation – value of 14th observation)
= 400 + 0.4(450 – 400)
= 400 + 0.4(50)
= 400 + 20
∴ P90 = 420

Question 3.
Calculate 2nd decile and 65th percentile for the following:
*(frequency distribution given as an image in the original)*
Solution:
We construct the less than cumulative frequency table as given below:
*(table given as an image in the original)*
Here, n = 200
D2 = value of $$2\left(\frac{n+1}{10}\right)^{\text{th}}$$ observation
= value of $$2\left(\frac{200+1}{10}\right)^{\text{th}}$$ observation
= value of (2 × 20.1)th observation
= value of (40.2)th observation
Cumulative frequency which is just greater than (or equal to) 40.2 is 58.
∴ D2 = 120
P65 = value of $$65\left(\frac{n+1}{100}\right)^{\text{th}}$$ observation
= value of $$65\left(\frac{200+1}{100}\right)^{\text{th}}$$ observation
= value of (65 × 2.01)th observation
= value of (130.65)th observation
Cumulative frequency which is just greater than (or equal to) 130.65 is 150.
∴ P65 = 280

Question 4.
From the following data calculate the rent of the 15th, 65th, and 92nd house.
*(data table given as an image in the original)*
Solution:
Arranging the given data in ascending order:
*(table given as an image in the original)*
Here, n = 100
P15 = value of $$15\left(\frac{n+1}{100}\right)^{\text{th}}$$ observation
= value of $$15\left(\frac{100+1}{100}\right)^{\text{th}}$$ observation
= value of (15 × 1.01)th observation
= value of (15.15)th observation
Cumulative frequency which is just greater than (or equal to) 15.15 is 25.
∴ P15 = 11000
P65 = value of $$65\left(\frac{n+1}{100}\right)^{\text{th}}$$ observation
= value of $$65\left(\frac{100+1}{100}\right)^{\text{th}}$$ observation
= value of (65 × 1.01)th observation
= value of (65.65)th observation
Cumulative frequency which is just greater than (or equal to) 65.65 is 70.
∴ P65 = 14000
P92 = value of $$92\left(\frac{n+1}{100}\right)^{\text{th}}$$ observation
= value of $$92\left(\frac{100+1}{100}\right)^{\text{th}}$$ observation
= value of (92 × 1.01)th observation
= value of (92.92)th observation
Cumulative frequency which is just greater than (or equal to) 92.92 is 98.
∴ P92 = 17000

Question 5.
The following frequency distribution shows the weight of students in a class.
*(frequency distribution given as an image in the original)*
(a) Find the percentage of students whose weight is more than 50 kg.
(b) If the weight column provided is of mid values then find the percentage of students whose weight is more than 50 kg.
Solution:
(a) Let the percentage of students weighing less than 50 kg be x.
∴ Px = 50
*(table given as an image in the original)*
From the table, out of 120 students, 84 students have their weight less than 50 kg.
∴ Number of students weighing more than 50 kg = 120 – 84 = 36
∴ Percentage of students having their weight more than 50 kg = $$\frac{36}{120}$$ × 100 = 30%

(b) The difference between any two consecutive mid values of weight is 5 kg.
The class intervals must be of width 5, with 40, 45, … as their mid values.
∴ The class intervals will be 37.5 – 42.5, 42.5 – 47.5, etc.
We construct the less than cumulative frequency table as given below:
*(table given as an image in the original)*
Here, N = 120
Let Px = 50
The value 50 lies in the class 47.5 – 52.5
∴ L = 47.5, h = 5, f = 29, c.f. = 55
∴ 50 = L + $$\frac{h}{f}\left(\frac{xN}{100} - \text{c.f.}\right)$$ = 47.5 + $$\frac{5}{29}$$(1.2x – 55)
∴ x = 58 (approximately)
∴ 58% of students are having weight below 50 kg.
∴ Percentage of students having weight above 50 kg is 100 – 58 = 42
∴ 42% of students are having weight above 50 kg.

Question 6.
Calculate D4 and P48 from the following data:
*(data table given as an image in the original)*
Solution:
The difference between any two consecutive mid values is 5, so the width of each class interval = 5.
∴ The class interval with mid-value 2.5 is 0 – 5, the class interval with mid-value 7.5 is 5 – 10, etc.
We construct the less than cumulative frequency table as given below:
*(table given as an image in the original)*
Here, N = 100
D4 class = class containing the $$\left(\frac{4N}{10}\right)^{\text{th}}$$ observation
∴ $$\frac{4N}{10} = \frac{4 \times 100}{10}$$ = 40
Cumulative frequency which is just greater than (or equal to) 40 is 50.
∴ D4 lies in the class 10 – 15.
∴ L = 10, h = 5, f = 25, c.f. = 25
∴ D4 = $$L + \frac{h}{f}\left(\frac{4N}{10} - \text{c.f.}\right)$$
= 10 + $$\frac{5}{25}$$(40 – 25)
= 10 + $$\frac{1}{5}$$(15)
= 10 + 3
∴ D4 = 13
P48 class = class containing the $$\left(\frac{48N}{100}\right)^{\text{th}}$$ observation
∴ $$\frac{48N}{100} = \frac{48 \times 100}{100}$$ = 48
Cumulative frequency which is just greater than (or equal to) 48 is 50.
∴ P48 lies in the class 10 – 15.
∴ L = 10, h = 5, f = 25, c.f. = 25
∴ P48 = $$L + \frac{h}{f}\left(\frac{48N}{100} - \text{c.f.}\right)$$
= 10 + $$\frac{5}{25}$$(48 – 25)
= 10 + $$\frac{1}{5}$$(23)
= 10 + 4.6
∴ P48 = 14.6

Question 7.
Calculate D9 and P20 of the following distribution.
*(distribution given as an image in the original)*
Solution:
We construct the less than cumulative frequency table as given below:
*(table given as an image in the original)*
Here, N = 240
D9 class = class containing the $$\left(\frac{9N}{10}\right)^{\text{th}}$$ observation
∴ $$\frac{9N}{10} = \frac{9 \times 240}{10}$$ = 216
Cumulative frequency which is just greater than (or equal to) 216 is 225.
∴ D9 lies in the class 80 – 100.
∴ L = 80, h = 20, f = 90, c.f. = 135
∴ D9 = $$L + \frac{h}{f}\left(\frac{9N}{10} - \text{c.f.}\right)$$
= 80 + $$\frac{20}{90}$$(216 – 135)
= 80 + $$\frac{2}{9}$$(81)
= 80 + 18
∴ D9 = 98
P20 class = class containing the $$\left(\frac{20N}{100}\right)^{\text{th}}$$ observation
∴ $$\frac{20N}{100} = \frac{20 \times 240}{100}$$ = 48
Cumulative frequency which is just greater than (or equal to) 48 is 50.
∴ P20 lies in the class 40 – 60.
∴ L = 40, h = 20, f = 35, c.f. = 15
∴ P20 = $$L + \frac{h}{f}\left(\frac{20N}{100} - \text{c.f.}\right)$$ = 40 + $$\frac{20}{35}$$(48 – 15) = 40 + 18.86
∴ P20 = 58.86

Question 8.
Weekly wages for a group of 100 persons are given below:
*(data table given as an image in the original)*
D3 for this group is ₹ 1100. Calculate the missing frequencies.
Solution:
Let a and b be the missing frequencies of class 500 – 1000 and class 2000 – 2500 respectively.
We construct the less than cumulative frequency table as given below:
*(table given as an image in the original)*
Here, N = 62 + a + b
Since N = 100,
62 + a + b = 100
∴ a + b = 38 …..(i)
Given, D3 = 1100
∴ D3 lies in the class 1000 – 1500.
∴ L = 1000, h = 500, f = 25, c.f. = 7 + a
∴ $$\frac{3N}{10} = \frac{3 \times 100}{10} = 30$$
∴ D3 = $$L + \frac{h}{f}\left(\frac{3N}{10} - \text{c.f.}\right)$$
∴ 1100 = 1000 + $$\frac{500}{25}$$[30 – (7 + a)]
∴ 1100 – 1000 = 20(30 – 7 – a)
∴ 100 = 20(23 – a)
∴ 100 = 460 – 20a
∴ 20a = 460 – 100
∴ 20a = 360
∴ a = 18
Substituting the value of a in equation (i), we get
18 + b = 38
∴ b = 38 – 18 = 20
∴ 18 and 20 are the missing frequencies of the class 500 – 1000 and class 2000 – 2500 respectively.

Question 9.
The weekly profit (in rupees) of 100 shops are distributed as follows:
*(distribution given as an image in the original)*
Find the limits of the profit of middle 60% of the shops.
Solution:
To find the limits of the profit of the middle 60% of the shops, we have to find P20 and P80.
We construct the less than cumulative frequency table as given below:
*(table given as an image in the original)*
Here, N = 100
P20 class = class containing the $$\left(\frac{20N}{100}\right)^{\text{th}}$$ observation
∴ $$\frac{20N}{100} = \frac{20 \times 100}{100} = 20$$
Cumulative frequency which is just greater than (or equal to) 20 is 26.
∴ P20 lies in the class 1000 – 2000.
∴ L = 1000, h = 1000, f = 16, c.f. = 10
∴ P20 = $$L + \frac{h}{f}\left(\frac{20N}{100} - \text{c.f.}\right)$$
= 1000 + $$\frac{1000}{16}$$(20 – 10)
= 1000 + $$\frac{125}{2}$$(10)
= 1000 + 625
∴ P20 = 1625
P80 class = class containing the $$\left(\frac{80N}{100}\right)^{\text{th}}$$ observation
∴ $$\frac{80N}{100} = \frac{80 \times 100}{100} = 80$$
Cumulative frequency which is just greater than (or equal to) 80 is 92.
∴ P80 lies in the class 4000 – 5000.
∴ L = 4000, h = 1000, f = 20, c.f. = 72
∴ P80 = $$L + \frac{h}{f}\left(\frac{80N}{100} - \text{c.f.}\right)$$
= 4000 + $$\frac{1000}{20}$$(80 – 72)
= 4000 + 50(8)
= 4000 + 400
∴ P80 = 4400
∴ the profit of middle 60% of the shops lie between the limits ₹ 1,625 to ₹ 4,400.

Question 10.
In a particular factory, workers produce various types of output units. The following distribution was obtained:
*(distribution given as an image in the original)*
Find the percentage of workers who have produced less than 82 output units.
Solution:
Since the given data is not continuous, we have to convert it into a continuous form by subtracting 0.5 from the lower limit and adding 0.5 to the upper limit of every class interval.
∴ the class intervals will be 69.5 – 74.5, 74.5 – 79.5, etc.
We construct the less than cumulative frequency table as given below:
*(table given as an image in the original)*
Here, N = 445
Let Px = 82
The value 82 lies in the class 79.5 – 84.5
∴ L = 79.5, h = 5, f = 50, c.f. = 85
∴ $$\frac{x \times 445}{100}$$ = c.f. + $$\frac{f}{h}$$(82 – L) = 85 + $$\frac{50}{5}$$(82 – 79.5) = 85 + 25 = 110
∴ x = $$\frac{110 \times 100}{445}$$ = 24.72
∴ 24.72% of workers produced less than 82 output units.
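Every grouped-frequency answer above instantiates the same interpolation formula, Pk = L + (h/f)(kN/100 – c.f.). As a quick cross-check (the helper function below is mine, not from the textbook), the formula reproduces D4 from Question 6 and P20, P80 from Question 9:

```python
def grouped_percentile(k, N, L, h, f, cf):
    # P_k = L + (h/f) * (k*N/100 - c.f.), the interpolation formula used above
    return L + (h / f) * (k * N / 100 - cf)

# Question 6: D4 = P40 lies in class 10-15 with L = 10, h = 5, f = 25, c.f. = 25, N = 100
print(grouped_percentile(40, 100, 10, 5, 25, 25))      # 13.0

# Question 9: P20 lies in class 1000-2000 with L = 1000, h = 1000, f = 16, c.f. = 10
print(grouped_percentile(20, 100, 1000, 1000, 16, 10)) # 1625.0

# Question 9: P80 lies in class 4000-5000 with L = 4000, h = 1000, f = 20, c.f. = 72
print(grouped_percentile(80, 100, 4000, 1000, 20, 72)) # 4400.0
```

The same call works for any decile or percentile: Dk is just P(10k).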
http://fractionslearningpathways.ca/tasks/curriculumConnectionsUnitC
Unit C

# Curriculum Connections

UNIT C: Partition unit fractions to create smaller unit fractions

Grade 1
• divide whole objects into parts and identify and describe, through investigation, equal-sized parts of the whole, using fractional names (e.g., halves; fourths or quarters).

Grade 2
• determine, through investigation using concrete materials, the relationship between the number of fractional parts of a whole and the size of the fractional parts (e.g., a paper plate divided into fourths has larger parts than a paper plate divided into eighths) (Sample problem: Use paper squares to show which is bigger, one half of a square or one fourth of a square.).

Grade 4
• demonstrate and explain the relationship between equivalent fractions, using concrete materials (e.g., fraction circles, fraction strips, pattern blocks) and drawings;
• compare and order fractions (i.e., halves, thirds, fourths, fifths, tenths) by considering the size and the number of fractional parts (e.g., 4/5 is greater than 3/5 because there are more parts in 4/5; 1/4 is greater than 1/5 because the size of the part is larger in 1/4);

Grade 5
• demonstrate and explain the concept of equivalent fractions, using concrete materials (e.g., use fraction strips to show that 3/4 is equal to 9/12);

Grade 6
• represent, compare, and order fractional amounts with unlike denominators, including proper and improper fractions and mixed numbers, using a variety of tools and using standard fractional notation;
• determine and explain, through investigation using concrete materials, drawings, and calculators, the relationships among fractions, decimal numbers, and percents

Grade 7
• add and subtract fractions with simple like and unlike denominators, using a variety of tools and algorithms;

Grade 8
• solve problems involving addition, subtraction, multiplication, and division with simple fractions.
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8929908,"math_prob":0.98904836,"size":1773,"snap":"2019-43-2019-47","text_gpt3_token_len":403,"char_repetition_ratio":0.15206331,"word_repetition_ratio":0.05109489,"special_character_ratio":0.2318105,"punctuation_ratio":0.17417417,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99499726,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T09:55:42Z\",\"WARC-Record-ID\":\"<urn:uuid:15117454-9bb8-471c-86fe-2a362ce5d97b>\",\"Content-Length\":\"13879\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25147280-a4b2-4f17-94d2-626bb68b262e>\",\"WARC-Concurrent-To\":\"<urn:uuid:a6876d48-6bfa-436f-b98a-ec9457af1a4b>\",\"WARC-IP-Address\":\"35.182.195.195\",\"WARC-Target-URI\":\"http://fractionslearningpathways.ca/tasks/curriculumConnectionsUnitC\",\"WARC-Payload-Digest\":\"sha1:SO77Y2LANMAUIDD3VDB6PUEEU6G5SO7G\",\"WARC-Block-Digest\":\"sha1:NGNRCDEIBAWEKMRKRZAZ73QACY65OSZQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668618.8_warc_CC-MAIN-20191115093159-20191115121159-00272.warc.gz\"}"}
The variational-relaxation algorithm for finding quantum bound states

Abstract

I describe a simple algorithm for numerically finding the ground state and low-lying excited states of a quantum system. The algorithm is an adaptation of the relaxation method for solving Poisson's equation, and is fundamentally based on the variational principle. It is especially useful for two-dimensional systems with nonseparable potentials, for which simpler techniques are inapplicable yet the computation time is minimal. (To be published in the American Journal of Physics.)

I Introduction

Solving the time-independent Schrödinger equation for an arbitrary potential energy function is difficult. There are no generally applicable analytical methods. In one dimension it is straightforward to integrate the equation numerically, starting at one end of the region of interest and working across to the other. For bound-state problems for which the energy is not known in advance, the integration must be repeated for different energies until the correct boundary condition at the other end is satisfied; this algorithm is called the shooting method.(1); (2); (3); (4)

For a nonseparable(5) potential in two or more dimensions, however, the shooting method does not work because there are boundary conditions that must be satisfied on all sides. One can still use matrix methods,(6); (7); (8) but the amount of computation required can be considerable and the diagonalization routines are mysterious black boxes to most students.

This paper describes a numerical method for obtaining the ground state and low-lying excited states of a bound system in any reasonably small number of dimensions. The algorithm is closely related to the relaxation method(9); (10); (11) for solving Poisson's equation, with the complication that the equation being solved depends on the energy, which is not known in advance.
The algorithm does not require any sophisticated background in quantum mechanics or numerical analysis. It is reasonably intuitive and easy to code.

The following section explains the most basic version of the algorithm, while Sec. III derives the key formula using the variational method. Section IV presents a two-dimensional implementation of the algorithm in Mathematica. Section V generalizes the algorithm to find low-lying excited states, and Sec. VI presents two nontrivial examples. The last two sections briefly discuss other related algorithms and how such calculations can be incorporated into the undergraduate physics curriculum.

II The algorithm

A standard exercise in computational physics(9); (10); (11) is to solve Poisson's equation,

    \nabla^2\phi(\vec r) = -\rho(\vec r),    (1)

where $\rho(\vec r)$ is a known function, by the method of relaxation: Discretize space with a rectangular grid, start with an arbitrary function $\phi(\vec r)$ that matches the desired boundary conditions, and repeatedly loop over all the grid points that are not on the boundaries, adjusting each value in relation to its nearest neighbors to satisfy a discretized version of Poisson's equation. To obtain that discretized version, write each term of the Laplacian operator in the form

    \frac{\partial^2\phi}{\partial x^2} \approx \frac{\phi(\vec r+\delta\hat x) + \phi(\vec r-\delta\hat x) - 2\,\phi(\vec r)}{\delta^2},    (2)

where $\delta$ is the grid spacing and $\hat x$ is a unit vector in the $x$ direction. Solving the discretized Poisson equation for $\phi_0$ then gives the needed formula,

    \phi_0 = \bar\phi_{nn} + \frac{1}{2d}\,\rho_0\,\delta^2,    (3)

where $\phi_0$ and $\rho_0$ are the values of $\phi$ and $\rho$ at $\vec r$ (the current grid location), $d$ is the dimension of space, and $\bar\phi_{nn}$ is the average of the $\phi$ values at the $2d$ nearest-neighbor grid locations. As this formula is applied repeatedly at all grid locations, the array of $\phi$ values "relaxes" to the desired self-consistent solution of Poisson's equation that matches the fixed boundary conditions, to an accuracy determined by the grid resolution.

What is far less familiar is that this method can be adapted to solve the time-independent Schrödinger equation.
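Before making that adaptation, it may help to see the Poisson update of Eq. (3) in executable form. The following is a minimal pure-Python sketch of two-dimensional Gauss-Seidel relaxation on the unit square; the function name and grid setup are my own illustration, not code from the paper.

```python
def relax_poisson(rho, n, sweeps):
    """Gauss-Seidel relaxation for the 2D Poisson equation on an
    (n+1) x (n+1) grid covering the unit square, with phi = 0 on
    the boundary.  Implements Eq. (3) with d = 2."""
    d = 2
    delta = 1.0 / n                       # grid spacing
    phi = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(sweeps):
        for i in range(1, n):
            for j in range(1, n):
                # average of the 2d nearest-neighbor values
                nn = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                             + phi[i][j + 1] + phi[i][j - 1])
                phi[i][j] = nn + rho[i][j] * delta**2 / (2 * d)
    return phi
```

After enough sweeps the discrete Laplacian of phi matches -rho at every interior point, which is easy to verify directly.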
To see the correspondence, write Schrödinger's equation with only the Laplacian operator term on the left-hand side:

    \nabla^2\psi(\vec r) = -2\,(E - V(\vec r))\,\psi(\vec r),    (4)

where $E$ is the energy eigenvalue, $V(\vec r)$ is the given potential energy function, and I am using natural units in which $\hbar$ and the particle mass are equal to 1. Discretizing the Laplacian gives a formula of the same form as Eq. (3),

    \psi_0 = \bar\psi_{nn} + \frac{1}{d}\,(E - V_0)\,\psi_0\,\delta^2,    (5)

where the subscripts carry the same meanings as in Eq. (3). The appearance of $\psi_0$ on the right-hand side creates no difficulty at all, because we can solve algebraically for $\psi_0$:

    \psi_0 = \frac{\bar\psi_{nn}}{1 - (E - V_0)\,\delta^2/d}.    (6)

The more pressing question is what to do with $E$, the energy eigenvalue that we do not yet know. The answer is that we can replace it with the energy expectation value

    \langle E\rangle = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle},    (7)

where $H$ is the Hamiltonian operator. We then update this expectation value after each step in the calculation. (The denominator in Eq. (7) is needed because the algorithm does not maintain the normalization of $\psi$.) As the relaxation process proceeds, $\langle E\rangle$ will steadily decrease, and we will eventually obtain a self-consistent solution for the ground-state energy and wave function.

The inner products in Eq. (7) are integrals, but we can compute them to sufficient accuracy as ordinary sums over the grid locations. The denominator is simply

    \langle\psi|\psi\rangle = \sum_i \psi_i^2\,\delta^d,    (8)

where the index $i$ runs over all grid locations and I have assumed that $\psi$ is real. Similarly, the potential energy contribution to the numerator is

    \langle\psi|V|\psi\rangle = \sum_i V_i\,\psi_i^2\,\delta^d.    (9)

To obtain the kinetic energy ($K$) contribution we again discretize the derivatives as in Eq. (2), arriving at the expression

    \langle\psi|K|\psi\rangle = -d\sum_i (\psi_i\,\bar\psi_{nn} - \psi_i^2)\,\delta^{d-2}.    (10)

Each of these inner products must be updated after every change to one of the $\psi$ values, but there is no need to evaluate them from scratch.
When we change $\psi_{0,\text{old}}$ to $\psi_{0,\text{new}}$, the corresponding changes to the inner products are

    \Delta\langle\psi|\psi\rangle = (\psi_{0,\text{new}}^2 - \psi_{0,\text{old}}^2)\,\delta^d,    (11)
    \Delta\langle\psi|V|\psi\rangle = V_0\,(\psi_{0,\text{new}}^2 - \psi_{0,\text{old}}^2)\,\delta^d,    (12)
    \Delta\langle\psi|K|\psi\rangle = -2d\,(\psi_{0,\text{new}} - \psi_{0,\text{old}})\,\bar\psi_{nn}\,\delta^{d-2} + d\,(\psi_{0,\text{new}}^2 - \psi_{0,\text{old}}^2)\,\delta^{d-2},    (13)

where the factor of 2 in the first term of Eq. (13) arises because there is an identical contribution of this form from the terms in the sum of Eq. (10) in which the current location is one of the neighboring grid locations.

The algorithm, then, is as follows:

1. Discretize space into a rectangular grid, placing the boundaries far enough from the region of interest that the ground-state wave function will be negligible there.

2. Initialize the array of $\psi$ values to represent a smooth, nodeless function such as the ground state of an infinite square well or a harmonic oscillator. All the values on the boundaries should be zero and will remain unchanged.

3. Use Eqs. (8)–(10) to calculate $\langle\psi|\psi\rangle$, $\langle\psi|V|\psi\rangle$, and $\langle\psi|K|\psi\rangle$ for the initial array.

4. Loop over all interior grid locations, setting the value at each location to

    \psi_0 = \frac{\bar\psi_{nn}}{1 - (\langle E\rangle - V_0)\,\delta^2/d}.    (14)

Also use Eqs. (11)–(13) to compute the changes to $\langle\psi|\psi\rangle$ and $\langle\psi|H|\psi\rangle$ that result from this change to $\psi_0$, and use these quantities to update the value of $\langle E\rangle$ before proceeding to the next grid location.

5. Repeat step 4 until $\langle E\rangle$ and $\psi$ no longer change, within the desired accuracy.

The simplest procedure, as just described, is to update each $\psi$ value "in place," so that a change at one grid location immediately affects the calculation for the next grid location. In the terminology of relaxation methods, this approach is called the Gauss-Seidel algorithm.(9); (10); (11)

III Variational interpretation

In the previous section I asserted, but did not prove, that $\langle E\rangle$ will steadily decrease during the relaxation process. To see why this happens, it is instructive to derive Eq.
(14) using the variational method of quantum mechanics.(12) The idea is to treat each local value $\psi_0$ as a parameter on which the function $\psi$ depends, and repeatedly adjust these parameters, one at a time, to minimize the energy expectation value $\langle E\rangle$. So let us consider how the expression for $\langle E\rangle$ in Eq. (7) depends on $\psi_0$.

Focusing first on the denominator of Eq. (7), we discretize the integral as in Eq. (8), but rewrite the sum as

    \langle\psi|\psi\rangle = \psi_0^2\,\delta^d + s,    (15)

where $s$ is an abbreviation for the terms in the sum that do not depend on $\psi_0$. Similarly, the discretization of Eqs. (9) and (10) allows us to write the numerator of Eq. (7) as

    \langle\psi|H|\psi\rangle = -2d\,\psi_0\,\bar\psi_{nn}\,\delta^{d-2} + d\,\psi_0^2\,\delta^{d-2} + V_0\,\psi_0^2\,\delta^d + h,    (16)

where the factor of 2 in the first term is the same as in Eq. (13) and $h$ is an abbreviation for all the terms that do not depend on $\psi_0$.

Figure 1: Mathematica code to implement the basic variational-relaxation algorithm for a two-dimensional quantum system. Here the potential energy function is for a harmonic oscillator, for which the solutions are known analytically.

Inserting Eqs. (15) and (16) into Eq. (7) gives

    \langle E\rangle = \frac{h - 2d\,\delta^{d-2}\,\psi_0\,\bar\psi_{nn} + (d\,\delta^{d-2} + V_0\,\delta^d)\,\psi_0^2}{s + \delta^d\,\psi_0^2},    (17)

where I have written the $h$ and $s$ terms first because they are larger than the others by a factor on the order of the total number of lattice points.(13) We are looking for the value of $\psi_0$ that minimizes this expression. Differentiating with respect to $\psi_0$ and setting the result equal to 0 gives a complicated equation, but in the limit of a large lattice it is a valid approximation to keep only the leading terms in $h$ and $s$. With that approximation, after some algebra, the extremization condition reduces to

    \psi_0 = \frac{\bar\psi_{nn}}{1 - [(h/s) - V_0]\,\delta^2/d}.    (18)

The ratio $h/s$ is equal to $\langle E\rangle$ in the limit of an infinite lattice, so this result is effectively the same as Eq. (14).
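The paper's reference implementation is the Mathematica code of Fig. 1, reproduced here only as an image. As a cross-check, the following is a rough pure-Python translation of the basic algorithm of Sec. II. It is a sketch, not the author's code: the function name is mine, and for simplicity the energy expectation value is recomputed once per sweep rather than incrementally after every site update as Eqs. (11)–(13) allow.

```python
import math

def variational_relaxation(V, n, sweeps):
    """Gauss-Seidel variational relaxation for the 2D time-independent
    Schrodinger equation (hbar = m = 1) on the unit square, with
    psi = 0 on the boundary.  V is an (n+1) x (n+1) grid of potential
    values."""
    d, delta = 2, 1.0 / n

    # Nodeless starting guess: the infinite-square-well ground state.
    psi = [[math.sin(math.pi * i / n) * math.sin(math.pi * j / n)
            for j in range(n + 1)] for i in range(n + 1)]

    def expectation_E():
        # Discrete inner products of Eqs. (8)-(10).
        norm = pot = kin = 0.0
        for i in range(1, n):
            for j in range(1, n):
                nn = 0.25 * (psi[i + 1][j] + psi[i - 1][j]
                             + psi[i][j + 1] + psi[i][j - 1])
                norm += psi[i][j]**2 * delta**d
                pot += V[i][j] * psi[i][j]**2 * delta**d
                kin -= d * (psi[i][j] * nn - psi[i][j]**2) * delta**(d - 2)
        return (kin + pot) / norm

    for _ in range(sweeps):
        E = expectation_E()                  # held fixed for one sweep
        for i in range(1, n):                # in-place (Gauss-Seidel) update
            for j in range(1, n):
                nn = 0.25 * (psi[i + 1][j] + psi[i - 1][j]
                             + psi[i][j + 1] + psi[i][j - 1])
                psi[i][j] = nn / (1.0 - (E - V[i][j]) * delta**2 / d)
    return expectation_E(), psi
```

For the anisotropic harmonic oscillator of Sec. IV this sketch reproduces the analytic ground-state energy of 50 (in natural units) to within a fraction of a percent after a couple hundred sweeps.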
By similarly focusing on the leading nontrivial terms in $h$ and $s$ it is straightforward to show that this extremum is a minimum, if the lattice spacing $\delta$ is sufficiently small.

We can therefore be confident that each step of the algorithm will reduce the value of $\langle E\rangle$. This result suggests, but does not prove, that the algorithm will converge to the system's ground state. In fact every energy eigenfunction is a stationary point of the energy functional,(12) so there can be situations in which the algorithm converges (or almost converges) to an excited state instead of the ground state. But the excited states are unstable to small perturbations, and they can be avoided entirely by choosing an initial wave function that is sufficiently similar to the ground state. Once the algorithm brings $\langle E\rangle$ below every excited-state energy, the ground state is the only possible result after sufficiently many iterations.(14)

IV An implementation

Figure 1 shows a basic implementation of the variational-relaxation algorithm in Mathematica,(15) for a two-dimensional potential well. Translating this example to other computer languages should be straightforward.

The first four lines of the code define the resolution of the lattice (here $50\times50$), initialize the wave function to the ground state of an infinite square well, and then plot the initial wave function using a custom plotting function that maps the array of lattice points to a square in the $xy$ plane extending from 0 to 1 in each direction. Notice that the array size is one element larger in each dimension than the nominal lattice size ($51\times51$ rather than $50\times50$ in this case), so that the edges can be mapped to exactly 0 and 1, where the wave function will be held fixed throughout the calculation. Notice also that an offset of 1 is required when indexing into the array, because Mathematica array indices start at 1 rather than 0.

Lines 5 and 6 define and plot an array of values to represent the potential energy function.
Here, for testing purposes, this function is a harmonic oscillator potential with a classical angular frequency of 40 (in natural units) in the $x$ direction and 60 in the $y$ direction. The rest of the code is sufficiently versatile, however, that almost any potential energy function can be used, as long as its discrete representation is reasonably accurate.

Lines 7–13 compute the inner products $\langle\psi|\psi\rangle$ and $\langle\psi|H|\psi\rangle$ according to Eqs. (8) through (10). Because the width of the two-dimensional space is one unit, the lattice spacing $\delta$ is the reciprocal of the lattice size (1/50). Line 14 displays the initial value of $\langle E\rangle$.

The algorithm itself is implemented in the relax function (lines 15–24), whose argument is the number of times to iterate the algorithm for each lattice site. For each iteration step we loop over all the lattice sites and for each site, save the old wave function value, calculate the new value from Eq. (14), and update the inner products $\langle\psi|\psi\rangle$ and $\langle\psi|H|\psi\rangle$ using Eqs. (11)–(13). When everything is finished we display the final value of $\langle E\rangle$ and then plot the final wave function. To actually execute this function we type something like relax[100] for 100 iteration steps. We can do this repeatedly, checking the results for convergence.

For this harmonic oscillator example using a $50\times50$ lattice, 100 iteration steps results in an energy value of 49.97, within less than 0.1% of the analytically known value of 50 (that is, one-half times the sum of the $x$ and $y$ frequencies). After another 100 steps the energy converges to 49.94, slightly below the analytical value due to the lattice discretization. The calculated wave function has the familiar Gaussian shape. On a typical laptop computer, Mathematica can execute 100 iteration steps for a $50\times50$ lattice in just a few seconds.
This execution speed, along with the brevity of the code, brings two-dimensional calculations of this type well within the reach of a typical undergraduate homework assignment.

V Extensions

An easy trick for speeding up the algorithm is to use over-relaxation,(9); (10); (11) in which we try to anticipate subsequent iterations by "stretching" each change to a $\psi$ value by a factor between 1 and 2. If we call the value of expression (14) $\psi_{0,\text{nominal}}$, then the formula to update $\psi_0$ becomes

    \psi_{0,\text{new}} = \psi_{0,\text{old}} + \omega\,(\psi_{0,\text{nominal}} - \psi_{0,\text{old}}),    (19)

where the "stretch factor" $\omega$ is called the over-relaxation parameter. Figure 2 shows how the rate of convergence depends on $\omega$, for the two-dimensional harmonic oscillator example described in the preceding section.

Figure 2: The energy expectation value $\langle E\rangle$ as a function of the iteration number, for the two-dimensional harmonic oscillator example used in Sec. IV. The different data sets are for different values of the over-relaxation parameter $\omega$. The basic algorithm without over-relaxation corresponds to $\omega=1$. In this example, with a lattice size of $50\times50$, the optimum $\omega$ is about 1.8.

After finding the ground state of a particular system, we can go on to find its first excited state with only a minor modification of the algorithm. The idea is the same as with other variational solutions,(12) namely, to restrict the trial wave function to be orthogonal to the ground state. To do this, we periodically project out any contribution of the ground state to the trial function during the course of the calculation. More explicitly, the procedure is as follows:

1. Normalize and save the just-determined ground-state wave function as $\psi_{gs}$.

2. Initialize a new trial wave function that crudely resembles the first excited state, with a single node. The first excited state of an infinite square well or a harmonic oscillator would be a reasonable choice. It may be necessary to try different orientations for the node of this function.

3.
Proceed as in the basic algorithm described in Sec. II, but after each loop through all the grid locations, calculate the $\psi_{gs}$ component of the trial function as the inner product

    \langle\psi_{gs}|\psi\rangle = \sum_i \psi_{gs,i}\,\psi_i\,\delta^d.    (20)

Multiply this inner product by $\psi_{gs}$ and subtract the result from $\psi$ (point by point). Then recalculate the inner products $\langle\psi|\psi\rangle$ and $\langle\psi|H|\psi\rangle$ before proceeding to the next iteration.

The orientation of the initial state's node matters because we want it to resemble the first excited state more than the second. For example, the first excited state of the anisotropic harmonic oscillator potential used in Sec. IV has a node line parallel to the $y$ axis, so a good choice for the initial state would be $\sin(2\pi x)\sin(\pi y)$, rather than the orthogonal state $\sin(\pi x)\sin(2\pi y)$. If the latter state is used the algorithm will become stuck, for a rather long time, on the second excited state (with energy 110) before finally converging to the first excited state (with energy 90).

After finding the first excited state we can find the second excited state in a similar way, this time projecting out both the ground-state contribution and the first-excited-state contribution after each loop through all the grid locations. We could then go on to find the third excited state and so on, but if many states are needed it may be easier to use matrix methods.(6); (7); (8)

VI Examples

Figure 3: A "Flaming W" potential energy well (upper left) and the three lowest-energy wave functions and corresponding energies for a particle trapped in this well. The potential energy is zero inside the W and +200 (in natural units) in the flat area surrounding it. The grid resolution is 64×64.

To illustrate the versatility of the variational-relaxation algorithm, Fig. 3 shows results for an intricate but contrived potential energy well based on an image of the Weber State University "Flaming W" logo.(16) As before, the units are such that $\hbar$, the particle mass, and the width of the square grid region are all equal to 1.
In these units the sine-wave starting function (that is, the ground state of an infinitely deep two-dimensional box of this size) has a kinetic energy of $\pi^2\approx 9.87$, so the well depth of 200 is substantially larger than this natural energy scale. All three of the states shown are bound, with energies less than 200. As expected, the ground-state wave function spreads to fill the oddly shaped potential well, but is peaked near the center. The two lowest excited states are relatively close together in energy, with nodal curves that are roughly orthogonal to each other.

Figure 4: The interparticle potential energy (upper left) and the three lowest-energy wave functions and corresponding energies for a pair of equal-mass but distinguishable particles trapped in a one-dimensional infinite square well, repelling each other according to Eq. (21). The grid resolution is 50×50 and the maximum potential energy is 80 in natural units.

For a second example, note that the Schrödinger equation for a single particle in two dimensions is mathematically equivalent to that for two equal-mass particles in one dimension. We can therefore adapt our results to the latter system by renaming the coordinates $x$ and $y$ to $x_1$ and $x_2$. Consider, then, a pair of equal-mass (but distinguishable) particles trapped in a one-dimensional infinite square well, exerting a repulsive force on each other.(17) A smooth potential for modeling such a force is a Gaussian,(18)

    V(x_1, x_2) = V_{\max}\,e^{-(x_1 - x_2)^2/a^2},    (21)

and for illustration purposes I will take $V_{\max} = 80$ in natural units, as quoted in the caption to Fig. 4. This potential and its three lowest-energy stationary states are shown in Fig. 4. Interpreting these two-particle wave function plots takes a little practice; for example, the two peaks in the ground-state wave function correspond not to the two particles, but rather to two equally probable configurations for both particles, one with particle 1 on the left side of the well and particle 2 on the right, and the other with the particles interchanged.
This is an "entangled" state, because a measurement of one particle's position changes the probability distribution for the other particle's position. Notice that the first excited state, with a node along the line $x_1 = x_2$, has an almost identical probability density and only slightly more energy, as is typical of double-well potentials. In contrast, the second excited state tends to put one particle or the other near the middle of the well and has considerably more energy.

These two examples are merely meant to suggest the wide range of possible uses of the variational-relaxation algorithm. The algorithm should be applicable to real-world systems such as quantum dots(7); (19) and other nano-structures that can be modeled as two-dimensional or three-dimensional potential wells. For a system of two particles in one dimension, one could investigate other interaction potentials, repulsive or attractive, as well as other confining potentials.

VII Related algorithms

The algorithm described in this paper cannot possibly be new, because it is such a minor adaptation of the familiar relaxation algorithm for Poisson's equation. However, I have been unable to find a published description of it.(20); (21)

Giordano and Nakanishi(22) describe a closely related algorithm that also uses a rectangular lattice and the variational principle, but takes a Monte Carlo approach. Instead of looping over all lattice points in order, they choose successive lattice points at random. And instead of computing the value of $\psi_0$ that minimizes $\langle E\rangle$ using Eq. (14), they consider a random change to $\psi_0$, compute the effect of this change on $\langle E\rangle$, and then accept the change if $\langle E\rangle$ will decrease but reject it if it would increase. This Monte Carlo approach inspired the algorithm described in this paper.
However, the Monte Carlo algorithm is much less efficient, at least when fixed, uniform distributions are used for the random numbers.

Koonin and Meredith(23) describe an alternative algorithm that evolves an initial trial function in imaginary time $\tau$, according to the Schrödinger-like equation

    \frac{\partial\psi}{\partial\tau} = -H\psi,    (22)

whose formal solution is

    \psi(\tau) = e^{-H\tau}\,\psi(0).    (23)

If we imagine expanding $\psi(0)$ as a linear combination of eigenfunctions of the Hamiltonian $H$, then we see that the ground-state term in the expansion decreases the most slowly (or grows the most rapidly if its energy is negative), so eventually this evolution in "fake time" will suppress all the remaining terms in the expansion and yield a good approximation to the ground state.(24) An advantage of the imaginary-time approach is that its validity rests on the fundamental argument just given, rather than on the more subtle variational principle.

The speed and coding complexity of an imaginary-time algorithm depend on the specific method used for the imaginary-time evolution. Koonin and Meredith use a basic first-order forward-time Euler integration, resulting in an algorithm that is just as easy to code as the variational-relaxation algorithm, and that requires about the same execution time as the latter without over-relaxation. Their algorithm is therefore a strong candidate for use in an undergraduate course, especially if students are more familiar with time-evolution algorithms than with relaxation algorithms (and if teaching relaxation algorithms is not a course goal).

Faster imaginary-time algorithms also exist, but may be too sophisticated for many educational settings.
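To make the comparison concrete, here is a minimal pure-Python sketch of a Koonin–Meredith-style forward-Euler integration of Eq. (22), written for one dimension to keep it short. The function name, grid, and per-step renormalization are my own choices, not code from Ref. (23).

```python
import math

def imaginary_time_ground_state(V, n, dtau, steps):
    """Forward-Euler imaginary-time evolution psi <- psi - dtau * H psi
    for a particle on [0, 1] with psi = 0 at the walls (hbar = m = 1).
    V is a list of n+1 potential values; psi is renormalized after
    every step so it neither grows nor decays overall."""
    delta = 1.0 / n
    psi = [math.sin(math.pi * i / n) for i in range(n + 1)]
    for _ in range(steps):
        lap = [0.0] * (n + 1)
        for i in range(1, n):
            lap[i] = (psi[i + 1] + psi[i - 1] - 2.0 * psi[i]) / delta**2
        for i in range(1, n):
            # H psi = -(1/2) psi'' + V psi
            psi[i] += dtau * (0.5 * lap[i] - V[i] * psi[i])
        norm = math.sqrt(sum(p * p for p in psi) * delta)
        psi = [p / norm for p in psi]
    # Energy estimate <psi|H|psi> for the (now normalized) state.
    E = 0.0
    for i in range(1, n):
        lap_i = (psi[i + 1] + psi[i - 1] - 2.0 * psi[i]) / delta**2
        E += psi[i] * (-0.5 * lap_i + V[i] * psi[i]) * delta
    return E, psi
```

Stability of the explicit Euler step requires a rather small dtau (roughly the square of the grid spacing in these units), which is the main practical drawback of this simplest version.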
Simply switching to a centered-difference approximation for the time derivative, which is quite effective for the actual time-dependent Schrödinger equation,(25) yields an algorithm that is unstable no matter how small the time step.(26) Implicit algorithms(27) would solve the stability problem, but these require working with large matrices. One reviewer of early drafts of this paper strongly recommends an imaginary-time adaptation of the split-operator algorithm described in Sec. 16.6 of Ref. (2), which uses a fast Fourier transform to switch back and forth between position space and momentum space during each time step.

VIII Classroom use

Students in a computational physics course should be able to code the variational-relaxation algorithm themselves, perhaps after practicing on the ordinary relaxation algorithm for Poisson's or Laplace's equation. Coding the algorithm in just one spatial dimension can also be a good warm-up exercise, keeping in mind that it is usually easier to solve one-dimensional problems by the shooting method.

In an upper-division undergraduate course in quantum mechanics, it may be better to provide students with the basic code shown in Fig. 1 (or its equivalent in whatever programming language they will use). Typing the code into the computer gives students a chance to think about each computational step. After verifying that the algorithm works for a familiar example such as the two-dimensional harmonic oscillator, students can be asked to modify it to handle other potential energy functions, over-relaxation, and low-lying excited states.

Even in a lower-division "modern physics" course or the equivalent, I think there is some benefit in showing students that the two-dimensional time-independent Schrödinger equation, for an arbitrary potential energy function, can be solved.
For the benefit of students and others who are not ready to code the algorithm themselves, and for anyone who wishes to quickly explore some nontrivial two-dimensional stationary states, the electronic supplement(28) to this paper provides a JavaScript implementation of the algorithm with a graphical user interface, runnable in any modern web browser.

In any of these settings, and in any other physics course, introducing general-purpose numerical algorithms can help put the focus on the laws of physics themselves, avoiding an over-emphasis on idealized problems and specialized analytical tricks.

Acknowledgements.
I am grateful to Nicholas Giordano, Hisao Nakanishi, and Saul Teukolsky for helpful correspondence, to Colin Inglefield for bringing Ref. (7) to my attention, to the anonymous reviewers for many constructive suggestions, and to Weber State University for providing a sabbatical leave that facilitated this work.

References

1. N. J. Giordano and H. Nakanishi, Computational Physics, 2nd ed. (Pearson Prentice Hall, Upper Saddle River, NJ, 2006), Sec. 10.2.
2. H. Gould, J. Tobochnik, and W. Christian, An Introduction to Computer Simulation Methods, 3rd ed. (Pearson, San Francisco, 2007), http://www.compadre.org/osp/items/detail.cfm?ID=7375, Sec. 16.3.
3. M. Newman, Computational Physics, revised edition (CreateSpace, Seattle, 2013), Sec. 8.6.
4. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes, 3rd ed. (Cambridge University Press, Cambridge, 2007), Sec. 18.1.
5. When the potential energy function can be written as a sum of one-dimensional potentials, e.g., $V(x,y) = V_1(x) + V_2(y)$, the time-independent Schrödinger equation can be solved by separation of variables, reducing its solution to the one-dimensional case. However, a generic multidimensional potential does not have this property.
6. F. Marsiglio, "The harmonic oscillator in quantum mechanics: A third way," Am. J. Phys. 77(3), 253–258 (2009); R. L. Pavelich and F.
Marsiglio, "Calculation of 2D electronic band structure using matrix mechanics," Am. J. Phys. 84(12), 924–935 (2016).
7. P. Harrison and A. Valavanis, Quantum Wells, Wires and Dots, 4th ed. (John Wiley & Sons, Chichester, UK, 2016).
8. R. Schmied, Introduction to Computational Quantum Mechanics (unpublished lecture notes, 2016), http://atom.physik.unibas.ch/teaching/CompQM.pdf.
9. See Giordano and Nakanishi, Ref. (1), Sec. 5.2; Gould, Tobochnik, and Christian, Ref. (2), Sec. 10.5; Newman, Ref. (3), Secs. 9.1–9.2; and Press, et al., Ref. (4), Sec. 20.5.
10. S. E. Koonin and D. C. Meredith, Computational Physics: Fortran Version (Addison-Wesley, Reading, MA, 1990), Sec. 6.2.
11. A. L. Garcia, Numerical Methods for Physics, 2nd ed., revised (CreateSpace, Seattle, 2015), Sec. 8.1.
12. The variational method is discussed in most quantum mechanics textbooks. Especially thorough treatments can be found in C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics (John Wiley & Sons, New York, 1977), Vol. II, Complement E; E. Merzbacher, Quantum Mechanics, 3rd ed. (John Wiley & Sons, New York, 1998), Sec. 8.1; and R. Shankar, Principles of Quantum Mechanics, 2nd ed. (Springer, New York, 1994), Sec. 16.1. Each of these texts shows more generally that every discrete stationary state is an extremum of the energy functional $\langle E\rangle$.
13. The claim that $h$ is much larger than the other terms in the numerator of Eq. (17) is valid if $V$ is always positive. The potential energy can always be shifted so that this condition holds.
14. The trivial function $\psi = 0$ is also a solution of Eq. (14), but is unstable under small perturbations so the algorithm will never converge to this solution.
15. Wolfram Mathematica, http://www.wolfram.com/mathematica/.
17. When the two-particle potential consists only of an interaction term of the form $V(x_1 - x_2)$, the Schrödinger equation is separable in terms of center-of-mass and relative coordinates.
Adding a confining potential, such as the infinite square well used here, makes the problem irreducibly two-dimensional.
18. For simulation results of scattering interactions between two particles in one dimension interacting via a Gaussian potential, see J. J. V. Maestri, R. H. Landau, and M. J. Páez, "Two-particle Schrödinger equation animations of wave packet-wave packet scattering," Am. J. Phys. 68(12), 1113–1119 (2000).
19. For other approaches to analyzing quantum dots, see Z. M. Schultz and J. M. Essick, "Investigation of exciton ground state in quantum dots via Hamiltonian diagonalization method," Am. J. Phys. 76(3), 241–249 (2008); B. J. Riel, "An introduction to self-assembled quantum dots," Am. J. Phys. 76(8), 750–757 (2008); D. Zhou and A. Lorke, "Wave functions of elliptical quantum dots in a magnetic field," Am. J. Phys. 83(3), 205–209 (2015).
20. Section 18.3 of Numerical Recipes, Ref. (4), presents a general relaxation algorithm for systems of ordinary differential equations, and Sec. 8.0.1 shows how to treat eigenvalue problems by adding another equation to the system. For the special case of the one-dimensional time-independent Schrödinger equation the approach taken in this paper is much simpler. Reference (4) does not discuss eigenvalue problems for partial differential equations.
21. See, for example, W. F. Ames, Numerical Methods for Partial Differential Equations, 2nd ed. (Academic Press, New York, 1977). This standard reference includes a section on eigenvalue problems for partial differential equations, but does not describe any algorithm that I can recognize as equivalent to the one in this paper.
22. Giordano and Nakanishi, Ref. (1), Sec. 10.4.
23. Koonin and Meredith, Ref. (10), Sec. 7.4.
24. We can interpret the imaginary time parameter $\tau$ as an inverse temperature, and the exponential factor $e^{-H\tau}$ as a Boltzmann probability factor in the canonical ensemble.
Then the limit $\tau\to\infty$ corresponds to lowering the reservoir temperature to zero, putting the system into its ground state.
25. P. B. Visscher, "A fast explicit algorithm for the time-dependent Schrödinger equation," Computers in Physics 5(6), 596–598 (1991). This algorithm is also described in Ref. (1), Sec. 10.5, and Ref. (2), Sec. 16.5; I learned it from T. A. Moore in 1982.
26. See Ames, Ref. (21), Sec. 2-4. The argument given there is easily adapted to Eq. (22).
27. See Koonin and Meredith, Ref. (10), Secs. 7.2–7.3; Garcia, Ref. (11), Chap. 9; or Press et al., Ref. (4), Secs. 20.2–20.3.
28. See the "Quantum Bound States in Two Dimensions" web app at http://physics.weber.edu/schroeder/software/BoundStates2D.html.
[ null, "https://www.groundai.com/media/arxiv_projects/182487/x1.png", null, "https://www.groundai.com/media/arxiv_projects/182487/x2.png", null, "https://www.groundai.com/media/arxiv_projects/182487/Fig3.jpg", null, "https://www.groundai.com/media/arxiv_projects/182487/Fig4.jpg", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.65/groundai/img/loader_30.gif", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.65/groundai/img/comment_icon.svg", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.65/groundai/img/about/placeholder.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90276486,"math_prob":0.97487944,"size":28001,"snap":"2019-43-2019-47","text_gpt3_token_len":6146,"char_repetition_ratio":0.14380112,"word_repetition_ratio":0.016606823,"special_character_ratio":0.22081354,"punctuation_ratio":0.13498779,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9923702,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T19:15:54Z\",\"WARC-Record-ID\":\"<urn:uuid:d89d5495-a67a-46d9-a6dc-398c1e60a035>\",\"Content-Length\":\"423721\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c572f3d-ec05-42e0-b027-154636c27512>\",\"WARC-Concurrent-To\":\"<urn:uuid:de6660a3-a6ff-4fe9-b06e-432f5e56502e>\",\"WARC-IP-Address\":\"35.186.203.76\",\"WARC-Target-URI\":\"https://www.groundai.com/project/the-variational-relaxation-algorithm-for-finding-quantum-bound-states/\",\"WARC-Payload-Digest\":\"sha1:PGTE2KU5LRZT7H77W3FTPTQIISP2RVKR\",\"WARC-Block-Digest\":\"sha1:J2DLJGTAYYVWJ7EXG3WVZZLV7FIXMJJV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986684425.36_warc_CC-MAIN-20191018181458-20191018204958-00405.warc.gz\"}"}
https://www.physicsforums.com/threads/millikan-oil-drop-experiment-charge-determination.889089/
[ "# Millikan oil drop experiment charge determination\n\n• Jacob Pilawa\n\n#### Jacob Pilawa\n\nHowdy y'all!\n\nIf you could help with the following question, my physics class and I would be extremely grateful.\n\nA charged oil droplet is suspended motionless between two parallel plates (d=0.01m) that are held at a potential difference V. Periodically, the charge on the droplet changes as in the original oil drop experiment. Each time the charge changes, V is adjusted so that the droplet remains motionless. Here is a table of recorded values of the voltage:\n\ni. 350 V\n\nii. 408.3 V\n\niii. 490 V\n\niv. 612.5 V\n\nFrom the data above, determine the charge on the droplet for case (i) above. What assumptions do you need to make? (Hint: the ratio of voltages = ?)\n\nThanks a ton, we've been stumped.\n\nI'm going to be honest here, me and 2 friends have been working on this for about 4 hours, and we don't really have any substantial work to show. Any help would be great. Thanks.\n\nI can give you a hint, but I haven't solved it myself yet: ## 350* \, Q_1=408.3* \, Q_2=490* \, Q_3=612.5* \, Q_4 ##. ## Q_4<Q_3<Q_2<Q_1 ##. Find some ## Q_o ## so that ## Q_4=n_4 \, Q_o ##, ## Q_3=n_3 \, Q_o ##, etc., ## n_4, n_3,... ## integers (hopefully small ones). Sorry, I edited a couple of times because I read it incorrectly.\n\nLast edited:\nI can give you a hint, but I haven't solved it myself yet: ## 350 \, Q_1=408.3 \, Q_2=490 \, Q_3=612.5 \, Q_4 ##. ## Q_4<Q_3<Q_2<Q_1 ##. Find some ## Q_o ## so that ## Q_4=n_4 \, Q_o ##, ## Q_3=n_3 \, Q_o ##, etc., ## n_4, n_3,... ## integers (hopefully small ones).\n\nOkay, this makes sense. However, where can we go from here? Is there any way to solve for the integers?\n\nI have it, but I'm not allowed to give the solution. I can give you a hint though. The smallest number, ## Q_4 ## is greater than 3.
Another hint is that the numbers are exact enough that I think the data is probably simply constructed by the professor as a good learning exercise. One additional hint=let ## Q_4=n_4 ## (Ignore the ## Q_o ## part mentioned previously.) Please let us know if you figured out the answer.\n\nLast edited:\nI have it, but I'm not allowed to give the solution. I can give you a hint though. The smallest number, ## Q_4 ## is greater than 3.\n\nOkay, so we talked it out a little bit. So does this mean that the answer is 7e=1.12x10^-18 coulombs?\n\nOkay, so we talked it out a little bit. So does this mean that the answer is 7e=1.12x10^-18 coulombs?\nYes. One additional question for you=what did you get for the other 3 integers? And were the calculations almost exact?\n\nYes. One additional question for you=what did you get for the other 3 integers?\n\nWe got all the integers as 7,6,5, and 4. Thank you so much! We just screamed in excitement and relief.", null, "" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9539072,"math_prob":0.55436444,"size":1737,"snap":"2023-14-2023-23","text_gpt3_token_len":450,"char_repetition_ratio":0.10155799,"word_repetition_ratio":0.9589905,"special_character_ratio":0.25561312,"punctuation_ratio":0.1462766,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9898583,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T10:10:51Z\",\"WARC-Record-ID\":\"<urn:uuid:38decdb1-717f-4101-be07-17891fc66da7>\",\"Content-Length\":\"86928\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c0d7e2c-e1c8-4982-9228-bf98b387f1fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e817754-c0a9-4f9d-8633-ca525f82abe8>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/millikan-oil-drop-experiment-charge-determination.889089/\",\"WARC-Payload-Digest\":\"sha1:VPIYUSANZL4K7N4YED2UJ7FXXNVJ62UH\",\"WARC-Block-Digest\":\"sha1:A6PFGIH7RM67MVDQK442TZLIUTZPEP4J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224651815.80_warc_CC-MAIN-20230605085657-20230605115657-00530.warc.gz\"}"}
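The balancing-voltage reasoning in the thread above can be checked numerically: for a suspended droplet the electric force balances a fixed weight, so V·Q is constant, charges are inversely proportional to voltages, and we can search for the smallest set of integer multiples of e that fits. A sketch in Python — the voltages are the thread's data, while the search bound and tolerance are arbitrary assumptions of mine:

```python
# Voltages that each held the droplet motionless (from the thread's table).
voltages = [350.0, 408.3, 490.0, 612.5]

def integer_charges(voltages, tol=0.01, max_n=20):
    """Find the smallest integers n_i such that V_i * n_i is (nearly) the
    same for every voltage, i.e. V * Q = const with Q an integer multiple of e."""
    v_max = max(voltages)  # the largest voltage corresponds to the smallest charge
    for n in range(1, max_n + 1):
        const = v_max * n                      # candidate value of V * Q, in units of V * e
        ratios = [const / v for v in voltages]
        if all(abs(r - round(r)) < tol * r for r in ratios):
            return [round(r) for r in ratios]
    return None

charges = integer_charges(voltages)    # smallest consistent set of integers
q_case_i = charges[0] * 1.602e-19      # charge for case (i), in coulombs
```

This reproduces the thread's conclusion: integers 7, 6, 5, 4, and a case (i) charge of 7e ≈ 1.12×10⁻¹⁸ C.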
https://testbook.com/question-answer/the-salary-of-anup-is-30-more-than-that-of-barun--607825e623916f0e8b1e3310
[ "# The salary of Anup is 30% more than that of Barun. Find what percentage is the salary of Barun less than that of Anup?\n\nThis question was previously asked in\nWBCS Prelims 2016 Official Paper\n1. 26.12%\n2. 21.23%\n3. 23.07%\n4. 27.03%\n\nOption 3 : 23.07%\n\n## Detailed Solution\n\nCalculation:\n\nLet the salary of Barun be 100\n\nThen, the salary of Anup = 100 × (100 + 30)/100 = 130\n\nDifference = 130 - 100 = 30\n\nThe required percentage = 30/130 × 100 = 300/13 = 23.07%\n\n∴ The required percentage is 23.07%" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74477863,"math_prob":0.9761811,"size":218,"snap":"2021-43-2021-49","text_gpt3_token_len":79,"char_repetition_ratio":0.13084112,"word_repetition_ratio":0.0,"special_character_ratio":0.51376146,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98142403,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T17:42:14Z\",\"WARC-Record-ID\":\"<urn:uuid:837bb12c-72f5-42e0-ade9-b08a33022003>\",\"Content-Length\":\"115968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d71499a2-2374-48dc-b0f7-c54cf7b7cd93>\",\"WARC-Concurrent-To\":\"<urn:uuid:96879a53-fb3e-46fd-860b-2415a17fcffa>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/the-salary-of-anup-is-30-more-than-that-of-barun--607825e623916f0e8b1e3310\",\"WARC-Payload-Digest\":\"sha1:MYFPWJJC3A7DF6RMFYQC7CV7WL5HILKU\",\"WARC-Block-Digest\":\"sha1:VXD54LCEY46Q4VVWKUME7JLY6MKUTYBN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363215.8_warc_CC-MAIN-20211205160950-20211205190950-00330.warc.gz\"}"}
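The calculation above generalizes: if A's salary is p% more than B's, then B's is p/(100 + p) × 100 percent less than A's. A quick Python check (the function name is mine):

```python
def percent_less(p_more):
    """If A's salary is p_more percent higher than B's, return how many
    percent B's salary is below A's."""
    b = 100.0                             # take B's salary as the base
    a = b * (100.0 + p_more) / 100.0      # A's salary
    return (a - b) / a * 100.0            # difference relative to A

pct = percent_less(30)   # 30/130 * 100 = 300/13 = 23.0769... %
```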
https://blog.golayer.io/finance/fixed-cost
[ "There are many costs associated with running a business, and if you want that business to stay alive, you need to keep track of all of them. You also need to be aware of the differences between fixed and variable costs, as they play different roles in your company’s finances. While there may be some wiggle room regarding lowering certain variable costs - for example, you may find vendors that charge less for the raw materials - fixed costs tend to be more difficult to decrease. The amount you pay for rent, for example, is unlikely to go anywhere but up.\n\nIn this article, you will learn about fixed costs and how to find fixed costs from total costs and total variable costs. You will also learn how to calculate the fixed cost per unit and the average fixed cost. Additionally, you will learn how to use Google Sheets to calculate the fixed cost per unit and average fixed costs, as well as how to forecast total fixed costs using the fixed cost per unit.\n\n## What Is Fixed Cost?\n\nFixed costs are those not directly associated with production. They are sometimes referred to as indirect or overhead costs. Unlike variable costs, they remain fixed regardless of production level. Common fixed costs include rent, licenses, and permits. You may not need them directly for production, but your business can’t run without paying for these fixed costs.\n\nSalaries are a good example of a cost that is frequently considered fixed. However, in some cases, parts of the salary may be variable costs - like bonuses or commissions - which are affected by increased or decreased production. In these cases, the total amount paid out as salaries can be considered a semi-fixed or semi-variable cost, as it has both components. It’s important to identify these costs and divide them into fixed and variable components.\n\nWhether you produce thousands of units of your product or none at all, fixed costs won’t change. 
This makes the total fixed cost and the average fixed cost important metrics to consider when evaluating a company. Fixed costs have to be covered even when production and sales are at a minimum, so you need to keep a close eye on them.\n\n#### Break-Even Analysis: How to Calculate Break-Even Point\n\nA break-even analysis is a financial calculation used to determine a company’s break-even point. Here’s how to calculate the break-even point step-by-step.", null, "## How Do You Calculate Fixed Cost?\n\nTotal fixed costs can be calculated by adding up all fixed costs for a given period. They can also be calculated by subtracting variable costs from total costs. The fixed cost formula looks like this:\n\nFixed costs = Total costs - Variable costs\n\nDepending on the number of products made by your company, you will need to calculate a few other metrics associated with fixed costs. For example, you’ll need to calculate the total fixed costs for a given period to find the average fixed costs.\n\n### How To Find Total Fixed Cost?\n\nThere are different methods that can be used to calculate total fixed costs. The first is to go through your records, making a note of all costs not directly related to making your product and adding them up.\n\nIf you already know the total variable costs for a given period, you can subtract them from the total costs for that period to find the total fixed costs. The total variable costs can be calculated by multiplying the variable cost per unit by the total number of units made.\n\nTotal fixed costs = Total costs - (Variable cost per unit * Total number of units)\n\n### How To Find Average Fixed Cost?\n\nWhile fixed costs are not directly related to production, for accounting purposes, it’s useful to assign a proportion of these costs to each unit of output: the average fixed cost. 
Use the average fixed cost formula to calculate it for a given period.\n\nAverage fixed costs = Total fixed costs / Total number of units made\n\nSince these calculations need to be repeated periodically, it makes sense to set up a template in Google Sheets or Excel to speed up the process. The next section includes examples that have been done using Google Sheets. If you work in a team and have cost data spread out over different files, you can use Layer to quickly connect your data and set up flows to automatically update your calculations.\n\n## Fixed Costs Examples\n\nCompany X’s accounting team is working on fixed costs. They want to calculate their total fixed costs for break-even analysis, but they also want to know their average fixed costs. For this example, calculations will be based on the previous month’s costs.\n\n### Step 1: Identify All Fixed Costs\n\nEither of the methods mentioned in the previous sections can be used to find fixed costs. Company X can go through the expense records adding up all the costs that aren’t directly related to production. However, if they know their total costs and their total variable costs, they can use that instead.\n\n1. In Sheets, type in the values you need: total costs and variable costs for the month.", null, "How To Find Fixed Cost (Complete Guide) - Month’s Production Data\n2. You can calculate total variable costs by multiplying the variable cost per unit by the total number of units made that month.", null, "How To Find Fixed Cost (Complete Guide) - Calculate Total Variable Costs\n3.
The total variable cost for the month is $3,260.", null, "How To Find Fixed Cost (Complete Guide) - Total Variable Costs", null, "What Is Budgeting and Why Is It Important for a Business?\n\nDiscover what budgeting is, the goals and questions your budget needs to answer, and why creating a budget is essential for any business.\n\n### Step 2: Calculate Total Fixed Costs\n\nNow, you just need to apply the fixed cost formula.\n\n1. Subtract the total variable costs from the total costs.", null, "How To Find Fixed Cost (Complete Guide) - Calculate Total Fixed Costs\n2. The total fixed cost for the month is $2,240.", null, "How To Find Fixed Cost (Complete Guide) - Total Fixed Costs\n\n### Step 3: Calculate Average Fixed Costs\n\nNow that you know the total fixed cost, calculating the average fixed cost is easy.\n\n1. Divide the amount for total fixed costs by the total number of units made of both products.", null, "How To Find Fixed Cost (Complete Guide) - Calculate Average Fixed Costs\n2. That’s it. The average fixed cost for Company X is $2.24.", null, "How To Find Fixed Cost (Complete Guide) - Average Fixed Costs\n\n## Conclusion\n\nKeeping accurate and detailed records of all costs is an important part of running a profitable business. These costs need to be categorized as fixed or variable in order to carry out further financial analysis, like profitability and break-even analyses. A tool like Layer allows you to seamlessly connect your data across multiple files and formats, automatically updating your calculations.\n\nYou now know about fixed costs and how to find them from total costs or by identifying all fixed costs and adding them up. You also know how to use Google Sheets to calculate total fixed cost, fixed cost per unit, and average fixed cost." ]
[ null, "https://blog.golayer.io/uploads/images/special-offer/_w292h215/Break-Even-Analysis-How-to-Calculate-Break-Even-Point.png", null, "https://blog.golayer.io/uploads/images/builder/image-blocks/_w916h515/How-To-Find-Fixed-Cost-Complete-Guide-Month’s-Production-Data.png", null, "https://blog.golayer.io/uploads/images/builder/image-blocks/_w916h515/How-To-Find-Fixed-Cost-Complete-Guide-Calculate-Total-Variable-Costs.png", null, "https://blog.golayer.io/uploads/images/builder/image-blocks/_w916h515/How-To-Find-Fixed-Cost-Complete-Guide-Total-Variable-Costs.png", null, "https://blog.golayer.io/uploads/images/special-offer/_w208h153/What-Is-Budgeting-and-Why-Is-It-Important-for-a-Business.png", null, "https://blog.golayer.io/uploads/images/builder/image-blocks/_w916h515/How-To-Find-Fixed-Cost-Complete-Guide-Calculate-Total-Fixed-Costs.png", null, "https://blog.golayer.io/uploads/images/builder/image-blocks/_w916h515/How-To-Find-Fixed-Cost-Complete-Guide-Total-Fixed-Costs.png", null, "https://blog.golayer.io/uploads/images/builder/image-blocks/_w916h515/How-To-Find-Fixed-Cost-Complete-Guide-Calculate-Average-Fixed-Costs.png", null, "https://blog.golayer.io/uploads/images/builder/image-blocks/_w916h515/How-To-Find-Fixed-Cost-Complete-Guide-Average-Fixed-Costs.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93475467,"math_prob":0.8824304,"size":6511,"snap":"2023-40-2023-50","text_gpt3_token_len":1309,"char_repetition_ratio":0.20362686,"word_repetition_ratio":0.04428698,"special_character_ratio":0.20242666,"punctuation_ratio":0.09429477,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9945792,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,6,null,2,null,2,null,2,null,3,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T07:14:05Z\",\"WARC-Record-ID\":\"<urn:uuid:47d2f5d1-fdc7-4099-a7f8-b2cbf31826e2>\",\"Content-Length\":\"63431\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84dec459-58e6-40ce-9042-8d1f621f96a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:b099aed5-1962-4467-a938-d05494f8dd75>\",\"WARC-IP-Address\":\"104.26.3.81\",\"WARC-Target-URI\":\"https://blog.golayer.io/finance/fixed-cost\",\"WARC-Payload-Digest\":\"sha1:A6BG7RL72TNSNPPLFTFKEYWU2MABTWFA\",\"WARC-Block-Digest\":\"sha1:GUJXCDQBD3Q7OAOQRXNERM2BD53POLZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510368.33_warc_CC-MAIN-20230928063033-20230928093033-00859.warc.gz\"}"}
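The three spreadsheet steps in the fixed-cost walkthrough above reduce to one multiplication, one subtraction, and one division. A minimal Python sketch; the total monthly cost of $5,500 and the 1,000 units are my assumptions, chosen to be consistent with the article's reported results ($3,260 variable, $2,240 fixed, $2.24 average):

```python
def fixed_costs(total_costs, variable_cost_per_unit, units):
    """Return (total fixed costs, average fixed cost per unit)."""
    total_variable = variable_cost_per_unit * units   # Step 1: total variable costs
    total_fixed = total_costs - total_variable        # Step 2: fixed = total - variable
    average_fixed = total_fixed / units               # Step 3: spread over all units
    return total_fixed, average_fixed

# Assumed figures for Company X (see lead-in): $5,500 total, $3.26/unit, 1,000 units
fixed, avg = fixed_costs(total_costs=5500, variable_cost_per_unit=3.26, units=1000)
```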
https://invernessgangshow.net/23490th-undergoes-beta-decay-what-is-the-atomic-number-of-the-resulting-element/
[ "Be familiar with A/Z notation for alpha, beta, positron, neutron, gamma, and the three isotopes of hydrogen. Describe the changes in the atomic number and mass number of a radioactive nucleus when a particle or ray is emitted. Convert the symbol-mass format of an element to A/Z format and then balance a nuclear reaction. Write a decay reaction when given the symbol-mass format of an element.\n\nMany nuclei are radioactive; that is, they decompose by emitting particles or rays and, in doing so, become a different nucleus. In our studies up to this point, atoms of one element were unable to change into different elements. That is because in all other types of changes we have discussed, only the electrons were changing. In these changes, the nucleus, which contains the protons that dictate which element an atom is, is changing. All nuclei with 84 or more protons are radioactive, and elements with fewer than 84 protons have both stable and unstable isotopes. All of these elements can go through nuclear changes and turn into different elements.\n\nYou are watching: \(\ce{_90^234Th}\) undergoes beta decay. What is the atomic number of the resulting element?\n\nIn natural radioactive decay, three common emissions occur. When these emissions were first observed, scientists were unable to identify them as already-known particles and so named them\n\nalpha particles (\(\alpha\)), beta particles (\(\beta\)), and gamma rays (\(\gamma\))\n\nusing the first three letters of the Greek alphabet. Some time later, alpha particles were identified as helium-4 nuclei, beta particles were identified as electrons, and gamma rays as a form of electromagnetic radiation like x-rays, except much higher in energy and even more dangerous to living systems.", null, "Figure \(\PageIndex{1}\): Although many species are encountered in nuclear reactions, this table summarizes the names, symbols, representations, and descriptions of the most common of these. Please commit this chart to memory, along with A/Z formats for the three hydrogen isotopes: H-2 = deuterium (d) and H-3 = tritium (t)\n\nFrequently, gamma ray production accompanies nuclear reactions of all types. In the alpha decay of \(\ce{U}\)-238, two gamma rays of different energies are emitted in addition to the alpha particle.\n\nVirtually all of the nuclear reactions in this chapter also emit gamma rays, but for simplicity the gamma rays are usually not shown. Nuclear reactions produce a great deal more energy than chemical reactions. Chemical reactions release the difference between the chemical bond energy of the reactants and products, and the energies released have an order of magnitude of \(1 \times 10^3\ \text{kJ/mol}\). Nuclear reactions release some of the binding energy and may convert tiny quantities of matter into energy. The energy released in a nuclear reaction has an order of magnitude of \(1 \times 10^{18}\ \text{kJ/mol}\). That means that nuclear changes involve almost a million times more energy per atom than chemical changes!\n\nCommon gamma emitters would include I-131, Cs-137, Co-60, and Tc-99.", null, "Figure \(\PageIndex{2}\): The three most common modes of nuclear decay\n\nExample \(\PageIndex{1}\)\n\nComplete the following nuclear reaction by filling in the missing particle.\n\nSolution\n\nThis reaction is an alpha decay. We can solve this problem one of two ways:\n\nSolution 1: When an atom gives off an alpha particle, its atomic number drops by 2 and its mass number drops by 4, leaving \(\ce{_84^206Po}\). We know the symbol is \(\ce{Po}\), for polonium, because this is the element with 84 protons on the periodic table.\n\nSolution 2: Remember that the mass numbers on each side must total up to the same amount. The same is true of the atomic numbers.\n\nMass numbers: \(210 = 4 + ?\); atomic numbers: \(86 = 2 + ?\)\n\nWe are left with \(\ce{_84^206Po}\)\n\n## Decay Series\n\nThe decay of a radioactive nucleus is a move toward becoming stable. Often, a radioactive nucleus cannot reach a stable state through a single decay. In such cases, a series of decays will occur until a stable nucleus is formed. The decay of \(\ce{U}\)-238 is an example of this. The \(\ce{U}\)-238 decay series starts with \(\ce{U}\)-238 and goes through fourteen separate decays to finally reach a stable nucleus, \(\ce{Pb}\)-206 (Figure 17.3.3). There are similar decay series for \(\ce{U}\)-235 and \(\ce{Th}\)-232. The \(\ce{U}\)-235 series ends with \(\ce{Pb}\)-207 and the \(\ce{Th}\)-232 series ends with \(\ce{Pb}\)-208." ]
[ null, "https://invernessgangshow.net/23490th-undergoes-beta-decay-what-is-the-atomic-number-of-the-resulting-element/imager_1_8375_700.jpg", null, "https://invernessgangshow.net/23490th-undergoes-beta-decay-what-is-the-atomic-number-of-the-resulting-element/imager_2_8375_700.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92217916,"math_prob":0.9317889,"size":4399,"snap":"2022-27-2022-33","text_gpt3_token_len":984,"char_repetition_ratio":0.11990899,"word_repetition_ratio":0.014084507,"special_character_ratio":0.22232325,"punctuation_ratio":0.10278114,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9504364,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T15:08:29Z\",\"WARC-Record-ID\":\"<urn:uuid:20242581-13a5-4111-b266-98c20ee42ab7>\",\"Content-Length\":\"15353\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82bac5d2-7ed7-4f80-a0f7-2d8156f9dc23>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f2ed8ae-780c-443b-a233-f321c657625b>\",\"WARC-IP-Address\":\"104.21.92.103\",\"WARC-Target-URI\":\"https://invernessgangshow.net/23490th-undergoes-beta-decay-what-is-the-atomic-number-of-the-resulting-element/\",\"WARC-Payload-Digest\":\"sha1:N6LPJM7GKLQOYIKGZC6J5HK5AXOUFAXG\",\"WARC-Block-Digest\":\"sha1:LNEMG5UQCTQC27WI5DW4RIEW6QIZLIOL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103640328.37_warc_CC-MAIN-20220629150145-20220629180145-00574.warc.gz\"}"}
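The bookkeeping in the worked example above — mass numbers and atomic numbers must balance on both sides — is easy to automate. A sketch in Python; the symbol table is deliberately truncated to the few elements used here, and the Th-234 case answers the page title's question (beta decay raises Z by one, giving Z = 91, protactinium):

```python
# Partial symbol lookup; only the elements used in this example are included.
SYMBOLS = {82: "Pb", 84: "Po", 86: "Rn", 90: "Th", 91: "Pa", 92: "U"}

def alpha_decay(mass, atomic):
    """Alpha emission: mass number drops by 4, atomic number by 2."""
    return mass - 4, atomic - 2

def beta_decay(mass, atomic):
    """Beta (electron) emission: a neutron becomes a proton, so Z rises by 1."""
    return mass, atomic + 1

# Example from the text: a 210/86 parent alpha-decays; 210 = 4 + 206, 86 = 2 + 84.
daughter = alpha_decay(210, 86)   # (206, 84), i.e. Po-206
# The title question: Th-234 undergoes beta decay.
th_product = beta_decay(234, 90)  # (234, 91), i.e. Pa-234
```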
http://forums.wolfram.com/mathgroup/archive/2004/Aug/msg00202.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "how to graphically fill the tails of a normal distribution\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg50012] how to graphically fill the tails of a normal distribution\n• From: Todd Allen <genesplicer28 at yahoo.com>\n• Date: Wed, 11 Aug 2004 05:53:11 -0400 (EDT)\n• Sender: owner-wri-mathgroup at wolfram.com\n\n```Hello all,\n\nHope you don't mind a question from a biologist trying to learn Mathematica. I have been teaching myself the syntax of Mathematica to analyze microarray data (i.e., a technology to simultaneously monitor the expression levels of thousands of genes). I am interested in producing a plot for a presentation in which the tails of a normal distribution are colored red to illustrate a concept. I can get the plot to work with a single tail, but I can't get the syntax to work for both tails of the distribution. Here is my code:\n\nThis works for a single tail, with the following code (necessary packages already loaded):\nndist=NormalDistribution[0,1]\n\nnpdf=PDF[ndist,x]\n\nFilledPlot[{npdf,If[x >= 2, 0, npdf]},{x,-4,4},Fills -> RGBColor[1,0,0]]\n\nHowever when I try the following with the Or operator (at least I think Or operator):\n\nFilledPlot[{npdf,If[x >= 2.0 || x <= -2.0, 0, npdf]},{x,-4,4},Fills -> RGBColor[1,0,0]]\n\nI get a series of error messages:\n\nPlot:plnr:If[x >= 2.0 || x <= -2.0, 0, npdf] is not a machine-size real number at x=-4\n\nThis error repeats itself with different x values and then a normal curve is drawn with neither tail colored in.\n\nWhat am I missing here?\n\nAny insight is much appreciated, thanks!\nTodd\n\n```\n\n• Prev by Date: Re: Problem with order of evaluation\n• Next by Date: Re: Re: Re: Reduce/Solve\n• Previous by thread: Re: x-ArcSin[Sin[x]]\n• Next by thread: Re: how to graphically fill the tails of a normal distribution" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/4.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8315304,"math_prob":0.59258956,"size":1787,"snap":"2020-24-2020-29","text_gpt3_token_len":502,"char_repetition_ratio":0.13516545,"word_repetition_ratio":0.13857678,"special_character_ratio":0.25909346,"punctuation_ratio":0.17098446,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96505374,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-03T01:30:10Z\",\"WARC-Record-ID\":\"<urn:uuid:fd8b045c-d22b-46d8-9f19-3a0ca43aab67>\",\"Content-Length\":\"45301\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b54c7173-bf84-4b42-83c5-19f75a2e72fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:d25d1413-f5fc-4ace-8706-170199f59b4a>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2004/Aug/msg00202.html\",\"WARC-Payload-Digest\":\"sha1:AOI32JAKGVWPYMFRJA5F5QEA2LOKIULO\",\"WARC-Block-Digest\":\"sha1:DGN4QNHCAV3IZNUEQXFB24TVD5N3LVOJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347426956.82_warc_CC-MAIN-20200602224517-20200603014517-00109.warc.gz\"}"}
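The sticking point in the post above is expressing the compound condition "x ≥ 2 or x ≤ −2" as a piecewise function. Independently of the Mathematica plotting syntax, the same tail function can be written and sanity-checked in plain Python: the area in the two tails beyond ±2 standard deviations of a standard normal should come out near 0.0455. This checks the math of the piecewise definition, not the `FilledPlot` call itself:

```python
import math

def npdf(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def tail_pdf(x):
    """PDF restricted to the two tails |x| >= 2 (the 'Or' condition), zero elsewhere."""
    return npdf(x) if (x >= 2.0 or x <= -2.0) else 0.0

# Trapezoid-rule integral of the tail function over the plot range [-4, 4].
n = 4000
xs = [-4.0 + 8.0 * i / n for i in range(n + 1)]
area = sum((tail_pdf(xs[i]) + tail_pdf(xs[i + 1])) / 2.0 * (8.0 / n) for i in range(n))
```

The same mask-based idea (evaluate the PDF, zero it outside the tail region, then fill between the curve and zero) carries over to plotting libraries such as matplotlib.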
https://socratic.org/questions/how-do-you-find-the-intercepts-for-2x-y-4
[ "# How do you find the intercepts for 2x-y=4?\n\nJun 24, 2015\n\nx-intercept is 2\ny-intercept is -4\n\n#### Explanation:\n\nthe x-intercept is the value of $x$ where the line of the equation crosses the x-axis; on the x-axis, $y = 0$\nso substituting $y = 0$ into $2 x - y = 4$\ngives\n$2 x - 0 = 4$ or $x = 2$\n\nSimilarly, the y-intercept is the value of $y$ where the line of the equation crosses the y-axis, $x = 0$\nso substituting $x = 0$ into $2 x - y = 4$\ngives\n$2 \\left(0\\right) - y = 4$ or $y = - 4$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.666599,"math_prob":1.0000063,"size":360,"snap":"2021-43-2021-49","text_gpt3_token_len":102,"char_repetition_ratio":0.16292135,"word_repetition_ratio":0.10909091,"special_character_ratio":0.2361111,"punctuation_ratio":0.057971016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000092,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T03:29:31Z\",\"WARC-Record-ID\":\"<urn:uuid:7ce634cb-df3a-473b-be8a-427ba3e530d6>\",\"Content-Length\":\"33031\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a586ed1-0644-4d88-b8c8-bf15a5e440ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:16ee75b4-95a1-4860-b27c-b27c4b11a056>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-find-the-intercepts-for-2x-y-4\",\"WARC-Payload-Digest\":\"sha1:ILVGH4I3FTICYXV2SW7HX6GZD4ZE2STA\",\"WARC-Block-Digest\":\"sha1:M4A35BG4FG3GCO2FYYX3RW5IQNKJPRH6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359082.78_warc_CC-MAIN-20211201022332-20211201052332-00265.warc.gz\"}"}
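The substitution procedure in the answer above (set y = 0 for the x-intercept, x = 0 for the y-intercept) works for any line ax + by = c with a and b nonzero. A small Python sketch of that rule:

```python
def intercepts(a, b, c):
    """Intercepts of the line a*x + b*y = c (a and b assumed nonzero)."""
    x_int = c / a   # set y = 0: a*x = c
    y_int = c / b   # set x = 0: b*y = c
    return x_int, y_int

# The line from the question, 2x - y = 4:
x_int, y_int = intercepts(2, -1, 4)   # x-intercept 2, y-intercept -4
```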
https://www.jiskha.com/questions/1074380/2-find-two-consecutive-prime-numbers-less-than-40-such-that-the-sum-of-their-opposites
[ "# math\n\n2. Find two consecutive prime numbers less than 40 such that the sum of their opposites is -5.\n3. Insert plus (+) or minus (-) on the digits 1 to 9 as shown to obtain 100. 1 2 3 4 5 6 7 8 9. How many can you form?\n\n1. what is the answer for question number 2?\n\n## Similar Questions\n\n1. ### Math\n\n1) The numbers 2 and 3 are prime numbers. They are also consecutive numbers. Are there other pairs of primes that are consecutive numbers? Why or why not? 2) which group of numbers, evens or odds, includes more prime numbers? Why?\n\nasked by Joe on October 4, 2017\n2. ### Math\n\nThe sum of 3 consecutive odd natural numbers is 69. Find the prime number out of these numbers.\n\nasked by Srishti on May 29, 2016\n3. ### Math\n\nAre there 11 consecutive positive whole numbers whose sum is prime? If yes, find all of them; if no, prove it!\n\nasked by Pim on October 15, 2009\n4. ### Math\n\n1) for any three consecutive numbers, what can you say about odd numbers and even numbers? Explain. 2 A)Mirari conjectures that, for any three consecutive numbers, one number would be divisible by 3. Do you think Mirari is\n\nasked by Ali on October 9, 2017\n5. ### MATH\n\nThe sum of three whole numbers is 193. The smaller two are consecutive integers and the larger two are consecutive even whole numbers. Find the three whole numbers.\n\nasked by YRIELLE RHAYNE on January 18, 2012\n1. ### Mathematics\n\nThe numbers 3-5-7 are three consecutive odds each being prime. Find the next example of a \"triple of prime \".\n\nasked by Abc on September 29, 2018\n2. ### Mathematics\n\nWhat are consecutive numbers? -The sum of TWO consecutive whole numbers is 99. What are they? -The sum of THREE consecutive whole numbers is 99. What are they? Can someone please show me if there is working out? (if needed)\n\nasked by Crissy XD on April 30, 2011\n3. ### Math\n\n1: Goldbach's Conjecture is a famous conjecture that has never been proven true or false.
The conjecture states that every even number, except 2, can be written as the sum of two prime numbers. For example, 16 can be written as\n\nasked by Jordan on October 9, 2017" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95824885,"math_prob":0.97981954,"size":2372,"snap":"2020-10-2020-16","text_gpt3_token_len":686,"char_repetition_ratio":0.1891892,"word_repetition_ratio":0.04109589,"special_character_ratio":0.2837268,"punctuation_ratio":0.11530815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942385,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-02T15:03:33Z\",\"WARC-Record-ID\":\"<urn:uuid:b30e2058-7dac-4561-8dcb-ecff51ef85ee>\",\"Content-Length\":\"20896\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:863df444-162a-48a9-ade5-c52299c93dbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e50af16-f614-404c-85eb-527d773e20c1>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/1074380/2-find-two-consecutive-prime-numbers-less-than-40-such-that-the-sum-of-their-opposites\",\"WARC-Payload-Digest\":\"sha1:V46GKCJHBXRPCCP3OXGG546XBQ64PRMR\",\"WARC-Block-Digest\":\"sha1:ND3UZTIGH323LVHGHGRPHVLXZZEHRDBX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370506988.10_warc_CC-MAIN-20200402143006-20200402173006-00051.warc.gz\"}"}
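The prime-number questions quoted in the record above (consecutive primes below 40, Goldbach's conjecture for even numbers such as 16) are easy to check mechanically. The sketch below is my own illustration, not part of the crawled page:

```python
# Illustration only (not from the crawled record): check the quoted questions.

def is_prime(n):
    """Trial-division primality test; fine for the small n used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    """For even n > 2, return primes (p, q) with p + q == n, else None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

primes_below_40 = [n for n in range(2, 40) if is_prime(n)]
consecutive_pairs = list(zip(primes_below_40, primes_below_40[1:]))

print(consecutive_pairs[:3])   # [(2, 3), (3, 5), (5, 7)]
print(goldbach_pair(16))       # (3, 13) -> 16 = 3 + 13, the record's example
```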
https://cs.stackexchange.com/questions/85870/time-complexity-of-prims-algorithm
[ "Time complexity of Prim's algorithm\n\nThere is this Prim's algorithm I am studying, the time complexity of which is $O(n^2)$ (in the adjacency matrix).\n\nAs far as I have understood, that is because we have to check all the nodes per every node, that is, when checking the first node's best edge, we have to check edges from the current node to all other nodes. That makes $(n-1)$ edges at most. We check this for all nodes, so it would be n*(n-1) edges to check at most.\n\nSo why is the time complexity $O(n^2)$?\n\nIn addition, why don't we consider the edges which create a loop and why don't we omit them in the algorithm? Does that make any difference in the time complexity?\n\nThe time complexity is $O(n^2)$ because $O(n\\cdot(n-1)) = O(n^2)$\nFor example $$O(2n) = O(n)\\\\O(3n) = O(n)\\\\O(\\frac{n}{2}) = O(n)\\\\O(2n^2) = O(n^2)$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94349796,"math_prob":0.99967957,"size":635,"snap":"2019-43-2019-47","text_gpt3_token_len":160,"char_repetition_ratio":0.12836768,"word_repetition_ratio":0.0,"special_character_ratio":0.2519685,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999995,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T15:23:29Z\",\"WARC-Record-ID\":\"<urn:uuid:b3dc6479-a4ef-4ef4-affa-1968d2ab5662>\",\"Content-Length\":\"138982\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fedafd1a-7821-4974-827e-63e940e83c63>\",\"WARC-Concurrent-To\":\"<urn:uuid:f87abab3-a38d-4ed5-aed8-ad2a78539426>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/85870/time-complexity-of-prims-algorithm\",\"WARC-Payload-Digest\":\"sha1:YANEQVXWN4FHL4WU6FOSTAVO3JZMF77N\",\"WARC-Block-Digest\":\"sha1:TOUWLRHFLIW747Y42MHCGDSQIE6ON5Q6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986675409.61_warc_CC-MAIN-20191017145741-20191017173241-00436.warc.gz\"}"}
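The $O(n^2)$ answer quoted above follows because Prim's algorithm on an adjacency matrix runs $n$ iterations, each scanning all $n$ vertices for the cheapest crossing edge. A minimal sketch (my own, not from the thread) that makes the two nested loops explicit:

```python
import math

def prim_total_weight(w):
    """Prim's algorithm on an adjacency matrix w, where w[i][j] is the edge
    weight (math.inf if no edge). Returns the total MST weight. The n
    iterations times the O(n) scans inside give the O(n^2) running time."""
    n = len(w)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known edge connecting v to the tree
    best[0] = 0                # start the tree at vertex 0
    total = 0
    for _ in range(n):                              # n iterations ...
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        for v in range(n):                          # ... each scans n vertices
            if not in_tree[v] and w[u][v] < best[v]:
                best[v] = w[u][v]
    return total

INF = math.inf
w = [[INF, 1, 3, INF],
     [1, INF, 1, 4],
     [3, 1, INF, 2],
     [INF, 4, 2, INF]]
print(prim_total_weight(w))   # 4  (tree edges 0-1, 1-2, 2-3)
```

On the thread's side question: self-loop edges can simply be left at infinity on the diagonal — they connect a tree vertex to itself, so they are never selected, and skipping or keeping them does not change the $O(n^2)$ bound.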
https://math.stackexchange.com/questions/2920286/convex-optimization-with-concave-constraints
[ "# Convex optimization with concave constraints\n\nI have a function\n\n$$f(x) = \\max_{1 \\leq i \\leq N} f_i(x)$$\n\nwhere the $f_i(x)$ are smooth, concave functions. Furthermore, $f(x)$ is a signed distance function, and I am only interested in the subset of points where $f(x) = 0$.\n\nGiven a point $p_h$, I want to find the point $p$ closest to $p_h$ such that $f(p) = 0$. Since the $f_i$ are concave, there are no guarantees as to $f(x)$ being convex or concave, so I can't directly solve\n\n\\begin{equation*} \\begin{aligned} & \\underset{p}{\\text{minimize}} & & || p - p_h||_2^2 \\\\ & \\text{subject to} & & f(p) = 0. \\end{aligned} \\end{equation*}\n\nInstead, my approach is the following:\n\n• If $f(p_h) < 0$, I want the point such that $f(p) \\geq 0$, which implies $f_i(p) \\geq 0$ or, more precisely, $-f_i(p) \\leq 0$; since $-f_i$ are convex, I can optimize as usual.\n\n• The problem lies when $f(p_h) > 0$. In this case, I need $f_i(p) \\leq 0$; at this point, I am lost, as the constraints are not convex.\n\nSo there are a few questions here:\n\n• Is there any way I can turn the $f_i(p) \\leq 0$ constraint into a convex constraint so that I can optimize as usual?\n• If not, is there any known transformation I can apply to turn it into a more amenable problem?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8960259,"math_prob":0.9998921,"size":1209,"snap":"2019-35-2019-39","text_gpt3_token_len":408,"char_repetition_ratio":0.110373445,"word_repetition_ratio":0.0,"special_character_ratio":0.34243175,"punctuation_ratio":0.122137405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000069,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-19T14:42:58Z\",\"WARC-Record-ID\":\"<urn:uuid:c80dcbb6-bec5-4ddc-9d58-7111d65efa4a>\",\"Content-Length\":\"128495\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76d133b6-e96d-4f21-8489-6e1ae6d4d10d>\",\"WARC-Concurrent-To\":\"<urn:uuid:42039928-67b7-4c05-8f56-478510a93c7d>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2920286/convex-optimization-with-concave-constraints\",\"WARC-Payload-Digest\":\"sha1:MPTZJTVL4S3CXGNMKSZXOCGJRV2J6B5K\",\"WARC-Block-Digest\":\"sha1:HUOFWVR7BURORYTSVWFDNCHASBMRTOEA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573533.49_warc_CC-MAIN-20190919142838-20190919164838-00312.warc.gz\"}"}
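A detail worth making explicit for the record above: the pointwise *maximum* of concave functions is generally not concave (it is the maximum of convex functions that stays convex), which is exactly why the $f_i(p) \le 0$ side of the problem is hard. A tiny numeric counterexample of my own:

```python
# My counterexample (not from the thread): f = max(f1, f2) with concave f1, f2
# fails the midpoint test for concavity.

def f1(x):
    return -(x - 1.0) ** 2      # concave parabola peaking at x = 1

def f2(x):
    return -(x + 1.0) ** 2      # concave parabola peaking at x = -1

def f(x):
    return max(f1(x), f2(x))    # pointwise maximum

a, b = -1.0, 1.0
mid = f((a + b) / 2)            # f(0)  = -1.0
chord = (f(a) + f(b)) / 2       # (0 + 0) / 2 = 0.0
print(mid, chord)               # -1.0 0.0
# Concavity would require mid >= chord; here mid < chord, so f is not concave.
```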
https://www.colorhexa.com/1022d8
[ "# #1022d8 Color Information\n\nIn a RGB color space, hex #1022d8 is composed of 6.3% red, 13.3% green and 84.7% blue. Whereas in a CMYK color space, it is composed of 92.6% cyan, 84.3% magenta, 0% yellow and 15.3% black. It has a hue angle of 234.6 degrees, a saturation of 86.2% and a lightness of 45.5%. #1022d8 color hex could be obtained by blending #2044ff with #0000b1. Closest websafe color is: #0033cc.\n\n• R 6\n• G 13\n• B 85\nRGB color chart\n• C 93\n• M 84\n• Y 0\n• K 15\nCMYK color chart\n\n#1022d8 color description : Strong blue.\n\n# #1022d8 Color Conversion\n\nThe hexadecimal color #1022d8 has RGB values of R:16, G:34, B:216 and CMYK values of C:0.93, M:0.84, Y:0, K:0.15. Its decimal value is 1057496.\n\nHex triplet RGB Decimal 1022d8 `#1022d8` 16, 34, 216 `rgb(16,34,216)` 6.3, 13.3, 84.7 `rgb(6.3%,13.3%,84.7%)` 93, 84, 0, 15 234.6°, 86.2, 45.5 `hsl(234.6,86.2%,45.5%)` 234.6°, 92.6, 84.7 0033cc `#0033cc`\nCIE-LAB 29.939, 60.773, -89.599 13.178, 6.211, 65.466 0.155, 0.073, 6.211 29.939, 108.265, 304.148 29.939, -9.234, -110.414 24.922, 50.771, -138.3 00010000, 00100010, 11011000\n\n# Color Schemes with #1022d8\n\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #d8c610\n``#d8c610` `rgb(216,198,16)``\nComplementary Color\n• #1086d8\n``#1086d8` `rgb(16,134,216)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #6210d8\n``#6210d8` `rgb(98,16,216)``\nAnalogous Color\n• #86d810\n``#86d810` `rgb(134,216,16)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #d86210\n``#d86210` `rgb(216,98,16)``\nSplit Complementary Color\n• #22d810\n``#22d810` `rgb(34,216,16)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #d81022\n``#d81022` `rgb(216,16,34)``\n• #10d8c6\n``#10d8c6` `rgb(16,216,198)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #d81022\n``#d81022` `rgb(216,16,34)``\n• #d8c610\n``#d8c610` `rgb(216,198,16)``\n• #0b1791\n``#0b1791` `rgb(11,23,145)``\n• #0c1ba9\n``#0c1ba9` `rgb(12,27,169)``\n• #0e1ec0\n``#0e1ec0` `rgb(14,30,192)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• 
#1428ee\n``#1428ee` `rgb(20,40,238)``\n• #2c3def\n``#2c3def` `rgb(44,61,239)``\n• #4353f1\n``#4353f1` `rgb(67,83,241)``\nMonochromatic Color\n\n# Alternatives to #1022d8\n\nBelow, you can see some colors close to #1022d8. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #1054d8\n``#1054d8` `rgb(16,84,216)``\n• #1043d8\n``#1043d8` `rgb(16,67,216)``\n• #1033d8\n``#1033d8` `rgb(16,51,216)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #1011d8\n``#1011d8` `rgb(16,17,216)``\n• #1f10d8\n``#1f10d8` `rgb(31,16,216)``\n• #3010d8\n``#3010d8` `rgb(48,16,216)``\nSimilar Colors\n\n# #1022d8 Preview\n\nThis text has a font color of #1022d8.\n\n``<span style=\"color:#1022d8;\">Text here</span>``\n#1022d8 background color\n\nThis paragraph has a background color of #1022d8.\n\n``<p style=\"background-color:#1022d8;\">Content here</p>``\n#1022d8 border color\n\nThis element has a border color of #1022d8.\n\n``<div style=\"border:1px solid #1022d8;\">Content here</div>``\nCSS codes\n``.text {color:#1022d8;}``\n``.background {background-color:#1022d8;}``\n``.border {border:1px solid #1022d8;}``\n\n# Shades and Tints of #1022d8\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #01020f is the darkest color, while #fcfcff is the lightest one.\n\n• #01020f\n``#01020f` `rgb(1,2,15)``\n• #020521\n``#020521` `rgb(2,5,33)``\n• #040834\n``#040834` `rgb(4,8,52)``\n• #050b46\n``#050b46` `rgb(5,11,70)``\n• #070e58\n``#070e58` `rgb(7,14,88)``\n• #08116a\n``#08116a` `rgb(8,17,106)``\n• #09147d\n``#09147d` `rgb(9,20,125)``\n• #0b178f\n``#0b178f` `rgb(11,23,143)``\n• #0c19a1\n``#0c19a1` `rgb(12,25,161)``\n• #0d1cb3\n``#0d1cb3` `rgb(13,28,179)``\n• #0f1fc6\n``#0f1fc6` `rgb(15,31,198)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #1125ea\n``#1125ea` `rgb(17,37,234)``\n• #2133ef\n``#2133ef` `rgb(33,51,239)``\n• #3344f0\n``#3344f0` `rgb(51,68,240)``\n• #4555f1\n``#4555f1` `rgb(69,85,241)``\n• #5765f3\n``#5765f3` `rgb(87,101,243)``\n• #6a76f4\n``#6a76f4` `rgb(106,118,244)``\n• #7c87f5\n``#7c87f5` `rgb(124,135,245)``\n• #8e98f7\n``#8e98f7` `rgb(142,152,247)``\n• #a1a8f8\n``#a1a8f8` `rgb(161,168,248)``\n• #b3b9f9\n``#b3b9f9` `rgb(179,185,249)``\n• #c5cafb\n``#c5cafb` `rgb(197,202,251)``\n• #d7dbfc\n``#d7dbfc` `rgb(215,219,252)``\n• #eaebfd\n``#eaebfd` `rgb(234,235,253)``\n• #fcfcff\n``#fcfcff` `rgb(252,252,255)``\nTint Color Variation\n\n# Tones of #1022d8\n\nA tone is produced by adding gray to any pure hue. 
In this case, #727276 is the less saturated color, while #071be1 is the most saturated one.\n\n• #727276\n``#727276` `rgb(114,114,118)``\n• #696b7f\n``#696b7f` `rgb(105,107,127)``\n• #606488\n``#606488` `rgb(96,100,136)``\n• #575d91\n``#575d91` `rgb(87,93,145)``\n• #4e559a\n``#4e559a` `rgb(78,85,154)``\n• #464ea2\n``#464ea2` `rgb(70,78,162)``\n• #3d47ab\n``#3d47ab` `rgb(61,71,171)``\n• #343fb4\n``#343fb4` `rgb(52,63,180)``\n• #2b38bd\n``#2b38bd` `rgb(43,56,189)``\n• #2231c6\n``#2231c6` `rgb(34,49,198)``\n• #1929cf\n``#1929cf` `rgb(25,41,207)``\n• #1022d8\n``#1022d8` `rgb(16,34,216)``\n• #071be1\n``#071be1` `rgb(7,27,225)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #1022d8 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.543922,"math_prob":0.7049353,"size":3684,"snap":"2021-21-2021-25","text_gpt3_token_len":1670,"char_repetition_ratio":0.122826084,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5662324,"punctuation_ratio":0.23809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98861,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T03:43:45Z\",\"WARC-Record-ID\":\"<urn:uuid:a653b64b-012e-4298-bb64-39d4df52b1c0>\",\"Content-Length\":\"36265\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac03bdaf-95f3-4221-9c16-06de8f8adf7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3a0ce39-8c02-45d7-b3db-b70a843c3ba1>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/1022d8\",\"WARC-Payload-Digest\":\"sha1:QESTLXL5ED2ZLHRXIK7IKS6SQMMMIS6B\",\"WARC-Block-Digest\":\"sha1:Z7Y62TZMRU4WLRMHLJJGDL6CIHKDBN4P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991659.54_warc_CC-MAIN-20210516013713-20210516043713-00077.warc.gz\"}"}
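The RGB and CMYK figures quoted for `#1022d8` in the record above follow from the standard conversions — each RGB channel is one hex byte pair, and CMYK uses $K = 1 - \max(R,G,B)/255$ with $C = (1 - R/255 - K)/(1 - K)$, and so on. A quick check (my own sketch, not code from the page):

```python
def hex_to_rgb(h):
    """'#1022d8' -> (16, 34, 216): each channel is one hex byte pair."""
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion (the usual textbook formula)."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    return tuple((1 - c - k) / (1 - k) for c in (r, g, b)) + (k,)

rgb = hex_to_rgb('#1022d8')
print(rgb)                                        # (16, 34, 216)
c, m, y, k = rgb_to_cmyk(*rgb)
print(round(c, 2), round(m, 2), y, round(k, 2))   # 0.93 0.84 0.0 0.15
```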
https://www.physicsforums.com/threads/prove-cauchy-sequence-find-bounds-on-limit.88514/
[ "# Prove Cauchy sequence & find bounds on limit\n\nHere's the problem statement:\n\nProve that $x_1,x_2,x_3,...$ is a Cauchy sequence if it has the property that $|x_k-x_{k-1}|<10^{-k}$ for all $k=2,3,4,...$. If $x_1=2$, what are the bounds on the limit of the sequence?\n\nSomeone suggested that I use the triangle inequality as follows:\n\nlet $n=m+l$\n$$|a_n-a_m|=|a_{m+l}-a_m|$$\n$$|a_{m+l}-a_m|\\leq |a_{m+l}-a_{m+l-1}|+|a_{m+l-1}-a_{m+l-2}|+...+|a_{m+1}-a_m|$$\n\nNow by hypothesis, $|a_k-a_{k-1}|<10^{-k}$, so\n\n$$|a_{m+l}-a_m|<10^{-(m+l)}+10^{-(m+l-1)}+...+10^{-(m+1)}$$.\n\nIt looks like we have an $\\epsilon$ such that $|a_n-a_m|<\\epsilon$. Before we get to the bounds on the limit, is that correct? Is anything missing?\n\nTom Mattson\nStaff Emeritus\n$$|a_{m+l}-a_m|<\\sum_{i=1}^{l} 10^{-(m+i)}$$\n$$|a_n-a_m|<\\sum_{i=m+1}^{n} 10^{-i}$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6614479,"math_prob":0.999974,"size":1523,"snap":"2021-31-2021-39","text_gpt3_token_len":649,"char_repetition_ratio":0.18499012,"word_repetition_ratio":0.9044586,"special_character_ratio":0.43401182,"punctuation_ratio":0.15526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000006,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T09:41:05Z\",\"WARC-Record-ID\":\"<urn:uuid:bc4951b9-4edd-4a5b-8e3d-cf0fa8c81634>\",\"Content-Length\":\"60067\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e079575c-322f-4cc7-8574-f9c87aea351b>\",\"WARC-Concurrent-To\":\"<urn:uuid:ddf41cee-2dc0-488a-af0d-23777f5b2d0a>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/prove-cauchy-sequence-find-bounds-on-limit.88514/\",\"WARC-Payload-Digest\":\"sha1:D27J3EXRGAQSWCNVPAIMJMINB3O2LRI5\",\"WARC-Block-Digest\":\"sha1:CDRJ3NYTWNRZZ7PAAP2XZOSAMQ4OWIWQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057033.33_warc_CC-MAIN-20210920070754-20210920100754-00311.warc.gz\"}"}
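To answer the record's second question explicitly (my derivation, not quoted from the thread): summing the geometric tail of increments bounds the limit $x$ of the sequence when $x_1 = 2$,

```latex
% Geometric tail bound on the limit x of the sequence
|x - x_1| \le \sum_{k=2}^{\infty} |x_k - x_{k-1}|
        < \sum_{k=2}^{\infty} 10^{-k}
        = \frac{10^{-2}}{1 - 10^{-1}}
        = \frac{1}{90},
\qquad\text{hence}\qquad
2 - \frac{1}{90} < x < 2 + \frac{1}{90}.
```

The same tail sum started at $k = m+1$ gives the Cauchy estimate $|a_n - a_m| < 10^{-m}/9$, which can be made smaller than any $\epsilon$ by choosing $m$ large.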
http://www.numbersaplenty.com/9082
[ "Base representations of 9082:\n\nbase 2 (bin): 10001101111010\nbase 3: 110110101\nbase 4: 2031322\nbase 5: 242312\nbase 6: 110014\nbase 7: 35323\nbase 8 (oct): 21572\nbase 9: 13411\nbase 10: 9082\nbase 11: 6907\nbase 12: 530a\nbase 13: 4198\nbase 14: 344a\nbase 15: 2a57\nbase 16 (hex): 237a\n\n9082 has 8 divisors (see below), whose sum is σ = 14400. Its totient is φ = 4284.\n\nThe previous prime is 9067. The next prime is 9091. The reversal of 9082 is 2809.\n\nIt is a sphenic number, since it is the product of 3 distinct primes.\n\nIt is a Harshad number since it is a multiple of its sum of digits (19).\n\nIt is a plaindrome in base 14 and base 16.\n\nIt is an unprimeable number.\n\nIt is a polite number, since it can be written in 3 ways as a sum of consecutive naturals, for example, 82 + ... + 157.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (1800).\n\n2^9082 is an apocalyptic number.\n\n9082 is a deficient number, since it is larger than the sum of its proper divisors (5318).\n\n9082 is a wasteful number, since it uses fewer digits than its factorization.\n\n9082 is an evil number, because the sum of its binary digits is even.\n\nThe sum of its prime factors is 260.\n\nThe product of its (nonzero) digits is 144, while the sum is 19.\n\nThe square root of 9082 is about 95.2995278058. The cubic root of 9082 is about 20.8638202534.\n\nThe spelling of 9082 in words is \"nine thousand, eighty-two\".\n\nDivisors: 1 2 19 38 239 478 4541 9082" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9271904,"math_prob":0.99541605,"size":1212,"snap":"2019-26-2019-30","text_gpt3_token_len":348,"char_repetition_ratio":0.16721854,"word_repetition_ratio":0.008695652,"special_character_ratio":0.34158415,"punctuation_ratio":0.14814815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99677753,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T02:49:26Z\",\"WARC-Record-ID\":\"<urn:uuid:9d7a64ed-5b6e-4e1f-b79b-cb26fe3c8ece>\",\"Content-Length\":\"8005\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:565c2162-5794-4ee1-8ca7-2f7944a16585>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e316e12-475c-4bb4-a41d-42c41e9382bb>\",\"WARC-IP-Address\":\"62.149.142.170\",\"WARC-Target-URI\":\"http://www.numbersaplenty.com/9082\",\"WARC-Payload-Digest\":\"sha1:SHH4AP2C5NKGHP23KZ2CH7L2TYKTOUXB\",\"WARC-Block-Digest\":\"sha1:HF6M2NMU74G2ARYP3WCXLDHCULMSW7CG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627997533.62_warc_CC-MAIN-20190616022644-20190616044644-00500.warc.gz\"}"}
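Several of the record's claims about 9082 (the divisor list, σ = 14400, φ = 4284, the sphenic factorisation 2 · 19 · 239) are mechanically checkable. A quick sketch of my own, independent of the site:

```python
from math import gcd

def divisors(n):
    """All positive divisors of n by trial division (fine for small n)."""
    return sorted(d for d in range(1, n + 1) if n % d == 0)

def totient(n):
    """Euler's phi by direct gcd counting."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

n = 9082
print(divisors(n))          # [1, 2, 19, 38, 239, 478, 4541, 9082]
print(sum(divisors(n)))     # 14400  -> sigma, matching the record
print(totient(n))           # 4284   -> phi, matching the record
print(2 * 19 * 239 == n)    # True   -> sphenic: product of 3 distinct primes
```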
https://math.stackexchange.com/questions/3045439/repunits-whose-digits-in-base-b-are-all-b-1
[ "# Repunits whose digits in base $b$ are all $b-1$\n\nPositive integers whose base-$$b$$ representation contains only the digit $$1$$ are called repunits in that base. But what about positive integers whose base-$$b$$ representation contains only the digit $$b-1$$?\n\nFor instance, how would one call the base-$$20$$ number whose representation in that base is $$19\\cdot19\\cdot19\\cdot19\\cdot19_{20}?$$\n\nIs there a special name for this kind of number? I know repunits are useful in many number-theoretical contexts, but what about such numbers?\n\n• You mean like 99999 in base 10? – qwr Dec 18 '18 at 17:30\n• Probably not. They can be written as $b^{k}-1.$ – Thomas Andrews Dec 18 '18 at 17:30\n\n## 1 Answer\n\nWhile there is no name, to my knowledge, for the general case, they are referred to as Mersenne numbers for base 2." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90766764,"math_prob":0.9920812,"size":477,"snap":"2019-13-2019-22","text_gpt3_token_len":112,"char_repetition_ratio":0.13530655,"word_repetition_ratio":0.12307692,"special_character_ratio":0.24528302,"punctuation_ratio":0.08139535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99228626,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-19T15:35:52Z\",\"WARC-Record-ID\":\"<urn:uuid:ba1b223b-3798-4100-90a5-e693943ca218>\",\"Content-Length\":\"130054\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:492f7708-7358-49c9-b49c-9ca75bf1c16d>\",\"WARC-Concurrent-To\":\"<urn:uuid:12e1ab52-3319-410c-a671-2dda5ce3df6f>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3045439/repunits-whose-digits-in-base-b-are-all-b-1\",\"WARC-Payload-Digest\":\"sha1:YOOXNDPBSUWJQEP2I2FBKUPH4ZTPBKOO\",\"WARC-Block-Digest\":\"sha1:BDH6H7UZ7LWFPW3DKQO7C2VZOHRXYETC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232254889.43_warc_CC-MAIN-20190519141556-20190519163556-00055.warc.gz\"}"}
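The identity behind the answer above — a $k$-digit repdigit of $(b-1)$ in base $b$ equals $b^k - 1$, the base-2 case being the Mersenne numbers — can be sanity-checked directly. Python's `int()` parses bases up to 36, writing digit 19 as `j` (my sketch, not from the thread):

```python
# Check: k repeated digits of (b - 1) in base b equal b**k - 1.

def repdigit_value(b, k):
    """Value of the k-digit base-b numeral whose digits are all b - 1."""
    return sum((b - 1) * b ** i for i in range(k))

assert repdigit_value(10, 5) == 99999 == 10 ** 5 - 1             # base 10
assert repdigit_value(20, 5) == int('jjjjj', 20) == 20 ** 5 - 1  # 'j' = digit 19
assert repdigit_value(2, 7) == 2 ** 7 - 1 == 127                 # Mersenne number
print(repdigit_value(20, 5))   # 3199999
```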
https://zh.m.wikipedia.org/wiki/%E9%80%9F%E6%8E%A7%E6%AD%A5
[ "# Rate-determining step\n\n(Redirected from 速控步)\n\n## Applications\n\n1. $NO_{2}$ + $NO_{2}$ → $NO$ + $NO_{3}$ (slower)\n2. $NO_{3}$ + $CO$ → $NO_{2}$ + $CO_{2}$ (faster)" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9951708,"math_prob":1.00001,"size":501,"snap":"2020-34-2020-40","text_gpt3_token_len":611,"char_repetition_ratio":0.12072434,"word_repetition_ratio":0.0,"special_character_ratio":0.17964073,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97104466,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T02:51:12Z\",\"WARC-Record-ID\":\"<urn:uuid:5385713b-b935-406b-a4f3-67cec1617544>\",\"Content-Length\":\"42132\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ea4847c-85db-46df-8e4b-dbd979307ec2>\",\"WARC-Concurrent-To\":\"<urn:uuid:4754951d-749d-41ca-bd08-2070a2c80b1b>\",\"WARC-IP-Address\":\"208.80.153.224\",\"WARC-Target-URI\":\"https://zh.m.wikipedia.org/wiki/%E9%80%9F%E6%8E%A7%E6%AD%A5\",\"WARC-Payload-Digest\":\"sha1:QUSISDTU6WHPOVXJFLM6JDQVYTSTC4QM\",\"WARC-Block-Digest\":\"sha1:DKRNAXKP6VJ2NNNM7SYXPHFYHUVWWDDA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402130531.89_warc_CC-MAIN-20200930235415-20201001025415-00778.warc.gz\"}"}
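The textbook point behind the quoted two-step mechanism (standard chemistry background, not stated in the record): the slow first step is rate-determining, so the overall reaction and its rate law are

```latex
% Overall reaction; the rate is set by the slow step NO2 + NO2 -> NO + NO3
NO_2 + CO \longrightarrow NO + CO_2,
\qquad
\text{rate} = k\,[NO_2]^2 .
```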
http://mathcentral.uregina.ca/QandQ/topics/quartic%20polynomial
[ "", null, "", null, "Math Central - mathcentral.uregina.ca", null, "", null, "Quandaries & Queries", null, "", null, "", null, "", null, "Q & Q", null, "", null, "", null, "", null, "Topic:", null, "quartic polynomial", null, "", null, "", null, "start over\n\nOne item is filed under this topic.", null, "", null, "", null, "", null, "Page1/1", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "A quartic polynomial that is a perfect square 2012-02-05", null, "From archit:If P(x)=x^4+ax^3+bx^2-8x+1 is a perfect square then (a+b)=?Answered by Penny Nom.", null, "", null, "", null, "", null, "", null, "", null, "Page1/1", null, "", null, "", null, "", null, "Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.", null, "", null, "", null, "", null, "about math central :: site map :: links :: notre site français" ]
[ null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/topleft.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/topright.gif", null, "http://mathcentral.uregina.ca/lid/QQ/images/topic.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/bottomleft.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/bottomright.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_first.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_previous.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_next.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_last.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, 
"http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_first.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_previous.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_next.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_last.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/styles/mathcentral/interior/cms.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7103949,"math_prob":0.89899445,"size":497,"snap":"2020-24-2020-29","text_gpt3_token_len":146,"char_repetition_ratio":0.08924949,"word_repetition_ratio":0.0,"special_character_ratio":0.26559356,"punctuation_ratio":0.13333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9802096,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T13:30:01Z\",\"WARC-Record-ID\":\"<urn:uuid:9a2415e6-fa5a-4f90-91c9-229867821cce>\",\"Content-Length\":\"13606\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a73cdc6-663b-4630-ac1e-912d7af1c27f>\",\"WARC-Concurrent-To\":\"<urn:uuid:4726b77b-61f1-4c66-a128-7866653f0f62>\",\"WARC-IP-Address\":\"142.3.156.43\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QandQ/topics/quartic%20polynomial\",\"WARC-Payload-Digest\":\"sha1:CCTRJTDCOE7YQSXW7SIEVLX6RBTO2FL5\",\"WARC-Block-Digest\":\"sha1:ZO3F2BOOWAKSQB6M3OUP6DPX2CSZHYEK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347394074.44_warc_CC-MAIN-20200527110649-20200527140649-00591.warc.gz\"}"}
https://www.dyclassroom.com/jquery-interview-questions/jquery-interview-questions-set-3
[ "# jQuery Interview Questions - Set 3\n\n### jQuery Interview Questions\n\nShare", null, "## Q1: How will you set the background color of all the paragraphs to black and text color to white using jQuery?\n\nTo accomplish this we will use the `css()` method.\n\nIn the following example we are setting the background color to black and text color to white for all the paragraphs.\n\n``````\\$('p').css({\n'background-color': 'black',\n'color': 'white'\n});\n``````\n\n## Q2: How will you change the font size and family to 20px and Arial respectively for a given div having id 'user-info' using jQuery?\n\nFor this we will first select the div by id then using the `css()` method we will set the font size and family to the mentioned values.\n\n``````\\$('div#user-info').css({\n'font-size': '20px',\n'font-family': 'Arial'\n});\n``````\n\n## Q3: What is the use of `each` function in jQuery?\n\nThe `each` is a generic function to iterate over arrays and objects.\n\nIn the following example we will print the elements of an array using the each function.\n\nNote! `index` holds the index of the element of the array and `value` is the element of the array.\n\n``````var arr = ['zero', 'one', 'two'];\n\\$.each(arr, function(index, value) {\nconsole.log(index + \" \" + value);\n});\n``````\n\nThe above code will print the following.\n\n``````0 zero\n1 one\n2 two\n``````\n\n## Q4: Write jQuery code to select elements having class 'super', 'awesome' and 'fabulous'\n\nFor this we can write the following jQuery code.\n\n``````var super_elems = \\$('.super');\nvar awesome_elems = \\$('.awesome');\nvar fabulous_elems = \\$('.fabulous');\n``````\n\nOr, we can store them all in one single variable.\n\n``````var elems = \\$('.super, .awesome, .fabulous');\n``````\n\n## Q5: Write jQuery code to handle click event for a button having id 'my-button'\n\nFor this we will use the `on()` method. This will take two arguments. The first one is the `click` as we are handling the click event. 
The second one is a function which will be executed when the button is clicked.\n\n``````\$('#my-button').on('click', function() {\n// some code goes here...\n});\n``````\n\n## Q6: How will you check if an element having id 'info' has class 'loaded' using jQuery?\n\nFor this we will first select the element by id and then we will use the `hasClass()` method which will return `true` if the class is present. Otherwise, it will return `false`.\n\n``````// get the element\nvar el = \$('#info');\n\n// now check the class\nif (el.hasClass('loaded')) {\n// class 'loaded' is present\n} else {\n// class 'loaded' is absent\n}\n``````\n\n## Q7: How will you get the selected option for the given select element using jQuery?\n\n``````<select id=\"item\">\n<option value=\"-\">--- Select Item ---</option>\n<option value=\"Apple\">Apple</option>\n<option value=\"Mango\">Mango</option>\n</select>\n``````\n\nFor this we will first select the element by id and then call the `val()` method to get the selected value.\n\n``````var selectedValue = \$('#item').val();\n``````\n\n## Q8: How will you get the value of all the selected checkboxes using jQuery?\n\n``````<input type=\"checkbox\" name=\"fruit\" value=\"Apple\"> Apple\n<input type=\"checkbox\" name=\"fruit\" value=\"Mango\"> Mango\n<input type=\"checkbox\" name=\"fruit\" value=\"Orange\"> Orange\n``````\n\nFor this we will first select all the checkboxes then we will use the `each()` method to get the value of all the checked checkboxes.\n\n``````var checkbox = [];\n\$('input[name=\"fruit\"]:checked').each(function() {\ncheckbox.push(this.value);\n});\nconsole.log(\"Checkbox selected: \" + checkbox);\n``````\n\n## Q9: How will you get the value of the selected radio button using jQuery?\n\n``````<input type=\"radio\" name=\"fruit\" value=\"Apple\"> Apple\n``````\n\nFor this we will first select the radio button that is checked and then using the `val()` method we will get the value of the selected radio button.\n\n``````var radio = \$('input[name=\"fruit\"]:checked').val();\n``````\n\nFirst we will select the input email 
field using the id and then using the `val()` method we will get the value entered in the field.\n\n``````var email = \\$('#user-email').val();\n``````" ]
[ null, "https://www.dyclassroom.com/image/topic/interview-questions-jquery/logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5615306,"math_prob":0.7713787,"size":3357,"snap":"2019-51-2020-05","text_gpt3_token_len":795,"char_repetition_ratio":0.1410677,"word_repetition_ratio":0.1021611,"special_character_ratio":0.2838844,"punctuation_ratio":0.14196567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95774084,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T23:44:41Z\",\"WARC-Record-ID\":\"<urn:uuid:c340af38-7a3b-4749-b0fd-23e90951e3e3>\",\"Content-Length\":\"60952\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdf90acc-4e51-487c-8771-090c8d1f7de9>\",\"WARC-Concurrent-To\":\"<urn:uuid:06de8ac6-ac91-4461-b036-dc1cbc365a9f>\",\"WARC-IP-Address\":\"43.225.52.246\",\"WARC-Target-URI\":\"https://www.dyclassroom.com/jquery-interview-questions/jquery-interview-questions-set-3\",\"WARC-Payload-Digest\":\"sha1:6U6T336YPS6CRNGQVX3EMBOHC5DBJWPS\",\"WARC-Block-Digest\":\"sha1:Q4U4DJ4TVIBTQWBV7RNRPQA5OM5TTMJV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601040.47_warc_CC-MAIN-20200120224950-20200121013950-00229.warc.gz\"}"}
https://blog.shrirambalaji.dev/rust-advent-of-code-2020-day-01
[ "Follow\n\n# Shriram Balaji's Blog\n\nFollow", null, "# Rust Advent of Code 2020 - Day 01\n\nShriram Balaji\n·Dec 1, 2020·\n\nSpoilers ahead for the solutions of Advent of Code 2020 - Day 01 in Rust. The full solution is available here.\n\nIf you are unfamiliar, Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other.\n\nThe puzzle usually has a story, but we'd be looking at the actual problem we need to solve.\n\nPart 1: Given an array of numbers, find two values that sum to 2020. Once you find the two values, multiply them. This might sound familiar to many, and if it does - yes, its the popular Two Sum problem for folks used to leetcode-ing.\n\nThe solution looks something like this:\n\n``````\n/**\nThe algorithm:\n* initialize a map\n* go through the array of the elements ie. entries, and calculate its complement.\n* insert (complement, index) into the map if it doesnt exist.\n* if it doest exist, yay! we've found the indices that sum to 2020. 
Get the values from entries at those indices and return.\n**/\n\nlet mut map: HashMap<i32, usize> = HashMap::new();\n// `.iter()` returns an iterator, and `.enumerate` returns a tuple of (index, value) for every entry.\nfor (index, entry) in entries.iter().enumerate() {\nlet complement = TARGET_SUM - entry;\n// NOTE: `HashMap.contains_key` and `HashMap.get` take a reference and return a reference.\nif map.contains_key(&complement) {\nlet chosen_one_index = map.get(&complement).unwrap();\nlet chosen_two_index = &index;\n\n// we need a deref here as `chosen_one_index` is a reference to the stored index.\nlet chosen_one = entries.get(*chosen_one_index as usize).unwrap();\nlet chosen_two = entries.get(*chosen_two_index as usize).unwrap();\n\nprintln!(\"2 entries that sum to 2020: {}, {}\", chosen_one, chosen_two);\nprintln!(\"Product of two entries: {}\", chosen_one * chosen_two);\n} else {\nmap.insert(*entry, index);\n}\n}\n``````\n\nPart 1: Extended\n\nGiven an array of numbers, find three values that sum to 2020, and multiply them. The naive solution would use nested loops to try all possible triplets that sum to 2020. However, the complexity for something like that would be `O(n^3)` and we probably don't want that.\n\nA better solution could be to sort the array, and then use a two-pointer approach that slides a window in from both ends. The overall complexity for this solution would be `O(n^2)` worst case: the sort costs `O(n log(n))`, and the two-pointer scan is `O(n)` for each element of the outer loop.\n\n``````\nentries.sort();\n\nfor (i, entry) in entries.iter().enumerate() {\n// initialize the `low` pointer to the start, `high` to the end of the array.\n// the window in the beginning is the entire array\nlet mut low = i + 1;\nlet mut high = entries.len() - 1;\n\n// bounds check\nwhile low < high {\n// calculate sum at the current index\nlet current_sum = &entries[low] + &entries[high] + entry;\n// since there's only one such entry based on the question, we can break here.\n// otherwise we'd typically push these into a Vec<u8> | HashSet<u8> to deal with duplicates.\nif current_sum == TARGET_SUM {\nprintln!(\n\"3 Entries that sum to 2020: {}, {}, {}\",\n&entries[low], &entries[high], entry\n);\n\nprintln!(\n\"Product of 3 Entries: {}\",\n&entries[low] * &entries[high] * entry\n);\n\nbreak;\n\n} else if current_sum < TARGET_SUM {\n// since the array is sorted, if the current sum is smaller we need to slide the window in from the left,\n// so we increment the lower bound.\nlow += 1;\n} else {\n// the current sum is greater, let's reduce the upper bound instead.\nhigh -= 1;\n}\n}\n}\n``````" ]
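For readers who want to sanity-check the two approaches outside of Rust, here is a compact Python restatement of both parts (my own sketch, exercised with the well-known sample input from the puzzle statement rather than a personal puzzle input):

```python
def two_sum_product(entries, target=2020):
    # hash-set pass: remember seen values, look each value's complement up
    seen = set()
    for x in entries:
        if target - x in seen:
            return x * (target - x)
        seen.add(x)
    return None


def three_sum_product(entries, target=2020):
    # sort once, then for each element run a two-pointer scan over the rest
    e = sorted(entries)
    for i, x in enumerate(e):
        low, high = i + 1, len(e) - 1
        while low < high:
            s = x + e[low] + e[high]
            if s == target:
                return x * e[low] * e[high]
            if s < target:
                low += 1   # sum too small: move the lower bound up
            else:
                high -= 1  # sum too large: move the upper bound down
    return None
```

On the sample input `[1721, 979, 366, 299, 675, 1456]` from the puzzle statement these return `514579` and `241861950` respectively.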
[ null, "https://blog.shrirambalaji.dev/_next/image", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79511136,"math_prob":0.92994124,"size":3563,"snap":"2022-40-2023-06","text_gpt3_token_len":875,"char_repetition_ratio":0.11407699,"word_repetition_ratio":0.019966722,"special_character_ratio":0.28234634,"punctuation_ratio":0.17316018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922001,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-01T16:30:50Z\",\"WARC-Record-ID\":\"<urn:uuid:b9a62bd6-3f6b-4111-9e84-d588abc4908f>\",\"Content-Length\":\"433409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7482a9c-5b8a-4591-9ff4-570f58d0c1aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c8677bb-17a8-496a-b679-d93d715f30f7>\",\"WARC-IP-Address\":\"76.76.21.21\",\"WARC-Target-URI\":\"https://blog.shrirambalaji.dev/rust-advent-of-code-2020-day-01\",\"WARC-Payload-Digest\":\"sha1:F57XQ3NELMEOWUFLH67PDTQMQCSIM6SC\",\"WARC-Block-Digest\":\"sha1:27TWAB4QSWL6QYW6R42KPAM36XGPYWGC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499946.80_warc_CC-MAIN-20230201144459-20230201174459-00414.warc.gz\"}"}
https://dir.md/wiki/Curry%E2%80%93Howard_isomorphism?host=en.wikipedia.org
[ "# Curry–Howard correspondence", null, "Isomorphism between computer programs and constructive mathematical proofs\n\nThe beginnings of the Curry–Howard correspondence lie in several observations:\n\nIn other words, the Curry–Howard correspondence is the observation that two families of seemingly unrelated formalisms—namely, the proof systems on one hand, and the models of computation on the other—are in fact the same kind of mathematical objects.\n\na proof is a program, and the formula it proves is the type for the program\n\nSpeculatively, the Curry–Howard correspondence might be expected to lead to a substantial unification between mathematical logic and foundational computer science:\n\nAt the level of formulas and types, the correspondence says that implication behaves the same as a function type, conjunction as a \"product\" type (this may be called a tuple, a struct, a list, or some other term depending on the language), disjunction as a sum type (this type may be called a union), the false formula as the empty type and the true formula as the singleton type (whose sole member is the null object). Quantifiers correspond to dependent function space or products (as appropriate). This is summarized in the following table:\n\nBetween the natural deduction system and the lambda calculus there are the following correspondences:\n\nIf one restricts to the implicational intuitionistic fragment, a simple way to formalize logic in Hilbert's style is as follows. Let Γ be a finite collection of formulas, considered as hypotheses. Then δ is derivable from Γ, denoted Γ ⊢ δ, in the following cases:\n\nThis can be formalized using inference rules, as in the left column of the following table.\n\nTyped combinatory logic can be formulated using a similar syntax: let Γ be a finite collection of variables, annotated with their types. 
A term T (also annotated with its type) will depend on these variables [Γ ⊢ T:δ] when:\n\nThanks to the correspondence, results from combinatory logic can be transferred to Hilbert-style logic and vice versa. For instance, the notion of reduction of terms in combinatory logic can be transferred to Hilbert-style logic and it provides a way to canonically transform proofs into other proofs of the same statement. One can also transfer the notion of normal terms to a notion of normal proofs, expressing that the hypotheses of the axioms never need to be all detached (since otherwise a simplification can happen).\n\nConversely, the non-provability in intuitionistic logic of Peirce's law can be transferred back to combinatory logic: there is no typed term of combinatory logic that is typable with type\n\nSequent calculus is characterized by the presence of left introduction rules, right introduction rules and a cut rule that can be eliminated. The structure of sequent calculus relates to a calculus whose structure is close to that of some abstract machines. The informal correspondence is as follows:\n\nThanks to the Curry–Howard correspondence, a typed expression whose type corresponds to a logical formula is analogous to a proof of that formula. Here are examples.\n\nSince the antecedent here is just S, the consequent can be detached using Modus Ponens:\n\nThis is the same as the antecedent of the prior formula so, detaching the consequent:\n\n```\na:β → α, b:γ → β, g:γ ⊢ b : γ → β     a:β → α, b:γ → β, g:γ ⊢ g : γ\n─────────────────────────────────────────────────────────────────\na:β → α, b:γ → β, g:γ ⊢ a : β → α     a:β → α, b:γ → β, g:γ ⊢ b g : β\n─────────────────────────────────────────────────────────────────\na:β → α, b:γ → β, g:γ ⊢ a (b g) : α\n───────────────────────────────────\na:β → α, b:γ → β ⊢ λg. a (b g) : γ → α\n──────────────────────────────────────\na:β → α ⊢ λb. λg. a (b g) : (γ → β) → γ → α\n───────────────────────────────────────────\n⊢ λa. λb. λg. a (b g) : (β → α) → (γ → β) → γ → α\n```" ]
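Under the correspondence, that closing derivation is a proof of (β → α) → (γ → β) → γ → α, and the extracted program λa. λb. λg. a (b g) is just curried function composition. A quick illustration of this (mine, not from the article) in Python, with type hints playing the role of the formula:

```python
from typing import Callable, TypeVar

A = TypeVar("A")  # plays the role of α
B = TypeVar("B")  # plays the role of β
G = TypeVar("G")  # plays the role of γ

def compose(a: Callable[[B], A]) -> Callable[[Callable[[G], B]], Callable[[G], A]]:
    # The term λa. λb. λg. a (b g) from the derivation; its annotated type
    # is exactly the proved formula (β → α) -> (γ → β) -> γ → α.
    return lambda b: (lambda g: a(b(g)))
```

Applying it twice and then to an argument runs the "proof" as a program: `compose(str)(lambda n: n + 1)(41)` evaluates `str((41) + 1)`.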
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/71a95e1bc8374e6acb4cb602d4797868594621a5", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.831555,"math_prob":0.83925956,"size":3780,"snap":"2022-27-2022-33","text_gpt3_token_len":871,"char_repetition_ratio":0.1840572,"word_repetition_ratio":0.08716323,"special_character_ratio":0.2867725,"punctuation_ratio":0.13361463,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99444914,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T14:54:57Z\",\"WARC-Record-ID\":\"<urn:uuid:d0f096d4-9075-45e0-b2c9-d994b0817641>\",\"Content-Length\":\"9639\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c279c79-de0c-4f3a-8eb7-2245bc1277b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f12848c-2bf6-4482-aee6-e2e3f79f5e57>\",\"WARC-IP-Address\":\"104.21.23.236\",\"WARC-Target-URI\":\"https://dir.md/wiki/Curry%E2%80%93Howard_isomorphism?host=en.wikipedia.org\",\"WARC-Payload-Digest\":\"sha1:UK6AHUS2OEATFUAG3Z6QKJ34Y25TENDG\",\"WARC-Block-Digest\":\"sha1:PW3KCEIZUGVXE5LHGVQHH2KFUAUOKLVY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104244535.68_warc_CC-MAIN-20220703134535-20220703164535-00615.warc.gz\"}"}
https://www.acmicpc.net/problem/21263
[ "시간 제한 메모리 제한 제출 정답 맞은 사람 정답 비율\n1 초 512 MB 0 0 0 0.000%\n\n## 문제\n\nCurling is a sport in which players slide stones on a sheet of ice toward a target area. The team with the nearest stone to the center of the target area wins the game.\n\nTwo teams, Red and Blue, are competing on the number axis. After the game there are $(n+m)$ stones remaining on the axis, $n$ of them for the Red team and the other $m$ of them for the Blue. The $i$-th stone of the Red team is positioned at $a_i$ and the $i$-th stone of the Blue team is positioned at $b_i$.\n\nLet $c$ be the position of the center of the target area. From the description above we know that if there exists some $i$ such that $1 \\le i \\le n$ and for all $1 \\le j \\le m$ we have $|c - a_i| < |c - b_j|$ then Red wins the game. What's more, Red is declared to win $p$ points if the number of $i$ satisfying the constraint is exactly $p$.\n\nGiven the positions of the stones for team Red and Blue, your task is to determine the position $c$ of the center of the target area so that Red wins the game and scores as much as possible. Note that $c$ can be any real number, not necessarily an integer.\n\n## 입력\n\nThere are multiple test cases. The first line of the input contains an integer $T$ indicating the number of test cases. For each test case:\n\nThe first line contains two integers $n$ and $m$ ($1 \\le n, m \\le 10^5$) indicating the number of stones for Red and the number of stones for Blue.\n\nThe second line contains $n$ integers $a_1, a_2, \\cdots, a_n$ ($1 \\le a_i \\le 10^9$) indicating the positions of the stones for Red.\n\nThe third line contains $m$ integers $b_1, b_2, \\cdots, b_m$ ($1 \\le b_i \\le 10^9$) indicating the positions of the stones for Blue.\n\nIt's guaranteed that neither the sum of $n$ nor the sum of $m$ will exceed $5 \\times 10^5$.\n\n## 출력\n\nFor each test case output one line. 
If there exists some $c$ so that Red wins and scores as much as possible, output one integer indicating the maximum possible score of Red (NOT $c$). Otherwise output \"Impossible\" (without quotes) instead.\n\n## Sample Input 1\n\n3\n2 2\n2 3\n1 4\n6 5\n2 5 3 7 1 7\n3 4 3 1 10\n1 1\n7\n7\n\n\n## Sample Output 1\n\n2\n3\nImpossible\n\n\n## Hint\n\nFor the first sample test case we can assign $c = 2.5$ so that the stones at position 2 and 3 for Red will score.\n\nFor the second sample test case we can assign $c = 7$ so that the stones at position 5 and 7 for Red will score." ]
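A sketch of one way to solve this (my own, not part of the statement): if $c$ is placed at the midpoint of a gap between two consecutive blue stones, every red stone strictly inside that open gap is strictly closer to $c$ than any blue stone, and placing $c$ far beyond the extreme blue stones handles the two unbounded gaps. So the answer is the maximum number of red stones strictly inside any open gap between consecutive sorted blue positions (with sentinels at $\pm\infty$), and it is "Impossible" when that maximum is 0:

```python
from bisect import bisect_left, bisect_right

def best_red_score(reds, blues):
    reds = sorted(reds)
    bounds = [float("-inf")] + sorted(blues) + [float("inf")]
    best = 0
    for lo, hi in zip(bounds, bounds[1:]):
        # red stones strictly inside the open gap (lo, hi) all score when c
        # is chosen so that the nearest blue stones are the gap's endpoints
        best = max(best, bisect_left(reds, hi) - bisect_right(reds, lo))
    return best if best > 0 else "Impossible"
```

On the three sample cases this yields 2, 3 and "Impossible", matching the expected output.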
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8365084,"math_prob":0.99866784,"size":2322,"snap":"2021-04-2021-17","text_gpt3_token_len":704,"char_repetition_ratio":0.14797239,"word_repetition_ratio":0.07494646,"special_character_ratio":0.30706286,"punctuation_ratio":0.074074075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99672633,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-13T16:36:11Z\",\"WARC-Record-ID\":\"<urn:uuid:3836daf7-a5dc-4f7f-ab04-ce670d4a2512>\",\"Content-Length\":\"27383\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a61cc2b-4004-479e-8014-c8d3dfb076b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7296498-3e36-488d-978c-e10df1183fe7>\",\"WARC-IP-Address\":\"52.194.157.171\",\"WARC-Target-URI\":\"https://www.acmicpc.net/problem/21263\",\"WARC-Payload-Digest\":\"sha1:WVJLVLTIMZFM7OR6OPIJH7QITKGHMGK5\",\"WARC-Block-Digest\":\"sha1:FS23RV455MA2CVM4Z2RMC6EZ7YO6Q5DR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038073437.35_warc_CC-MAIN-20210413152520-20210413182520-00433.warc.gz\"}"}
https://fr.mathworks.com/help/deeplearning/ref/divideblock.html
[ "Main Content\n\n# divideblock\n\nDivide targets into three sets using blocks of indices\n\n## Syntax\n\n```[trainInd,valInd,testInd] = divideblock(Q,trainRatio,valRatio,testRatio) ```\n\n## Description\n\n`[trainInd,valInd,testInd] = divideblock(Q,trainRatio,valRatio,testRatio)` separates targets into three sets: training, validation, and testing. It takes the following inputs:\n\n `Q` Number of targets to divide up. `trainRatio` Ratio of targets for training. Default = `0.7`. `valRatio` Ratio of targets for validation. Default = `0.15`. `testRatio` Ratio of targets for testing. Default = `0.15`.\n\nand returns\n\n `trainInd ` Training indices `valInd` Validation indices `testInd` Test indices\n\n## Examples\n\n```[trainInd,valInd,testInd] = divideblock(3000,0.6,0.2,0.2); ```\n\n## Network Use\n\nHere are the network properties that define which data division function to use, what its parameters are, and what aspects of targets are divided up, when `train` is called.\n\n```net.divideFcn net.divideParam net.divideMode ```\n\n## See Also\n\nIntroduced in R2008a\n\nDownload ebook" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79581684,"math_prob":0.8483732,"size":850,"snap":"2021-04-2021-17","text_gpt3_token_len":205,"char_repetition_ratio":0.15602838,"word_repetition_ratio":0.0,"special_character_ratio":0.19764706,"punctuation_ratio":0.22012578,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9904884,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T07:42:57Z\",\"WARC-Record-ID\":\"<urn:uuid:0a7d0125-f2ac-4e86-8a00-d4be9b4c58b6>\",\"Content-Length\":\"66939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e56f112-8f12-49fe-a63f-86ee81c08d8f>\",\"WARC-Concurrent-To\":\"<urn:uuid:c835fe6d-da7d-4f45-ad32-309bc9bbca3c>\",\"WARC-IP-Address\":\"184.25.198.13\",\"WARC-Target-URI\":\"https://fr.mathworks.com/help/deeplearning/ref/divideblock.html\",\"WARC-Payload-Digest\":\"sha1:SSZWAZYNYQCHFVOC64NT53OHGJCFWASQ\",\"WARC-Block-Digest\":\"sha1:ZCGGKK63JQ5C2QAUEU65CCY6GBY6A4LR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038084601.32_warc_CC-MAIN-20210415065312-20210415095312-00012.warc.gz\"}"}
https://physics.stackexchange.com/questions/247471/does-the-position-time-graph-have-to-be-a-smooth-function
[ "# Does the position-time graph have to be a smooth function?\n\nIf at some time $t$ there were a discontinuity in the velocity-time graph, then the acceleration would be infinite at $t$. So intuitively, it seems that the velocity-time graph must be continuous. I was wondering if all derivatives of the position-time graph are continuous functions (i.e. if the position-time graph is smooth) and if there was a way to prove it.\n\n• If it take a perfectly rigid ball (which is practically not possible), and bounce it off a perfectly hard surface, then it's velocity will be discontinuous in time. Are you asking whether such a situation is practically possible? – theindigamer Apr 5 '16 at 5:35\n• I know that for velocity to be discontinuous there must be infinite rigidity, infinite force, or something else impractical. I am asking if all derivatives of position, i.e. acceleration, jerk, etc. also must be continuous. – Rogue Autodidact Apr 5 '16 at 6:11\n• I think this is an interesting question. People are often very casual about how differentiable things need to be, and it is certainly useful to actually think about it carefully. My intuition is that everything is at least smooth but I don't know why I think that. One reason to ask for more than smooth is that, to do physics, you need to be able to approximate things in some nice way, by some power series say, and you need that series to converge. Well, if it's a power series then things need to be analytic. – tfb Apr 5 '16 at 6:45\n• Related: physics.stackexchange.com/q/151399/2451 and links therein. – Qmechanic Apr 5 '16 at 6:48\n\n## 2 Answers\n\nAs you said, the next derivative of the velocity with respect to time is the acceleration. And the acceleration could in principle have a step somewhere due to a force starting to act on the object.\n\n• But doesn't the force need to increase continuously as opposed to having a step? 
For example, if I apply a force to an object by pushing it, wouldn't I have to continuously increase the force applied from zero to some nonzero value (as opposed to directly jumping from zero to some nonzero value)? – Rogue Autodidact Apr 5 '16 at 4:15\n• @RogueAutodidact: if you think of all forces as deriving from some sort of field theory, sure. But no inconsistency is introduced into the Newtonian framework by allowing discontinuities in the acceleration. – Jerry Schirmer Apr 5 '16 at 4:39\n• @JerrySchirmer if you allow acceleration to be discontinuous you lose determinism in classical mechanics: that's often regarded as bad, although not everyone would regard it as so. (See Norton's dome.) – tfb Apr 5 '16 at 7:04\n\nIf you plot the time versus position of 'the high point on a see-saw', there is an abrupt change when end A goes from high to low (while end B goes from low to high). This is not entirely trickery; there are lots of useful items that exploit some kind of discontinuity (a toggle switch or an electronic astable 'flip-flop').\n\nWhat happens when light reflects from a mirror? How can we deny that the path of the light is sharply kinked, i.e. 'not smooth' or that the velocity abruptly reverses, which (if a point particle were involved) would imply infinite acceleration?\n\nInsofar as Newtonian mechanics applies to an object, the center-of-mass motion of the object is always smooth because Newton's laws apply; a 'reflection' of a ball occurs by distortion of the ball's shape over a short timespan, and that distortion generates force just like compressing a spring, and the force accelerates the ball. This ought not to be generalized, however. Some things are beyond the scope of Newtonian mechanics." ]
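The spring picture in the last answer is easy to check numerically. In this sketch (parameter values are arbitrary, chosen only for illustration), a ball hits a floor modelled as a stiff linear spring: the rebound reverses the velocity within milliseconds, yet the velocity stays continuous, with only a large-but-finite acceleration:

```python
def bounce(k=1.0e6, m=1.0, g=9.8, v0=-5.0, dt=1.0e-6, steps=200_000):
    # semi-implicit Euler; x is the height of the ball's bottom above the floor
    x, v = 0.0, v0
    max_step_dv = 0.0
    for _ in range(steps):
        f = -m * g + (k * (-x) if x < 0.0 else 0.0)  # spring force only in contact
        dv = (f / m) * dt
        v += dv
        x += v * dt
        max_step_dv = max(max_step_dv, abs(dv))
    return v, max_step_dv
```

With these numbers the ball leaves the floor moving upward (v > 0 after 0.2 s of simulated time), while the largest per-step velocity change stays of order 0.005 m/s: the reversal is fast but nowhere discontinuous, and a stiffer spring just makes it faster.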
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94754624,"math_prob":0.8015857,"size":3717,"snap":"2019-35-2019-39","text_gpt3_token_len":878,"char_repetition_ratio":0.11176946,"word_repetition_ratio":0.025682183,"special_character_ratio":0.23298359,"punctuation_ratio":0.106591865,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9851395,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T21:14:15Z\",\"WARC-Record-ID\":\"<urn:uuid:b0cb6ed9-c31c-446f-bd17-9dc4b213cf75>\",\"Content-Length\":\"146315\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:59e88d5a-e5d5-448c-a9e1-37cc71c73b19>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c0d486b-cadd-4bc0-a871-f16c638caf29>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/247471/does-the-position-time-graph-have-to-be-a-smooth-function\",\"WARC-Payload-Digest\":\"sha1:7XWG5WHWKL6DVUX5THDXFPUT4RSVZT4A\",\"WARC-Block-Digest\":\"sha1:VLDTIA3K6ZB7HTBGNOHIA4UKYXRP2D6O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314130.7_warc_CC-MAIN-20190818205919-20190818231919-00317.warc.gz\"}"}
https://math.stackexchange.com/questions/1051092/is-f-even-or-odd-what-is-the-period-of-it
[ "# Is F even or odd, what is the period of it?\n\n$$F(\\theta)=\\sin(\\theta)\\int_{-l}^{l} e^{-ikz\\cos \\theta} h(z)\\,dz$$ We know that $F(\\theta)$ is defined on $0\\le \\theta \\le \\pi$ and $h(z)$ is defined on $|z|\\le l$\nWhat is the period of $F(\\theta)$?\nIs $F(\\theta)$ even or odd?\nWhat if we change the dummy variable $z$ in the integral? does it affect the aspect of being even or odd of the function?\nIs the function $e^{-ikz\\cos \\theta}$ standalone even or odd for variable $\\theta$?\nI know the even function is $F(\\theta)=F(-\\theta)$ and odd function is $F(\\theta)=-F(\\theta)$ but I am confused in this particular example!\n• $z$ is real, but it might be complex, don't ask such questions, suppose we're not sure, answer in both cases! – FreeMind Dec 4 '14 at 7:42\n• @FreeMind: \"don't ask such questions\"? It seems reasonable to assume that $z$ is real, but prudent to ask (since $z$ is often used for a complex variable). However, if $z$ is complex, then issues of the path of integration arise that will need to be answered. – robjohn Dec 4 '14 at 14:30\n• @FreeMind: For $F(\\theta)$ to be even or odd, it must be defined on a domain symmetric about $0$ since questions about oddness and evenness depend on comparing $F(\\theta)$ and $F(-\\theta)$. This cannot be done on $[0,\\pi]$. – robjohn Dec 4 '14 at 14:32\nThe integral, being a function of $\\cos(\\theta)$, is an even function of $\\theta$. Multiplying an even function by an odd function, $\\sin(\\theta)$, gives an odd function. Therefore, if the domain were extended to $[-\\pi,\\pi]$, then the function would be odd.\nSince $\\sin(\\theta)$ and $\\cos(\\theta)$ have a period of $2\\pi$, $F(\\theta)$ will have a period which is an integral divisor of $2\\pi$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8281243,"math_prob":0.99986124,"size":2112,"snap":"2019-35-2019-39","text_gpt3_token_len":661,"char_repetition_ratio":0.15749526,"word_repetition_ratio":0.15178572,"special_character_ratio":0.33001894,"punctuation_ratio":0.104166664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000039,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T22:28:15Z\",\"WARC-Record-ID\":\"<urn:uuid:2373e64a-81f9-42da-bc0e-3e841d79704f>\",\"Content-Length\":\"140297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1bb1b86-b901-45c7-9e4e-5959dd72d6a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0becf97e-999b-450d-bad2-fb43a67c16f8>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1051092/is-f-even-or-odd-what-is-the-period-of-it\",\"WARC-Payload-Digest\":\"sha1:TVCIGQNFG4UAESABSIHGHRYHJMQ46NDT\",\"WARC-Block-Digest\":\"sha1:7M2JZ4VOXJYWBDQUKHKWW3Z6GN7XIGT3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574084.88_warc_CC-MAIN-20190920221241-20190921003241-00240.warc.gz\"}"}
https://www.hpmuseum.org/forum/showthread.php?mode=threaded&tid=17432&pid=151764
[ "PYTHON program not work\n09-05-2021, 04:54 PM\nPost: #5\n robmio", null, "Member Posts: 169 Joined: Jan 2020\nRE: PYTHON program not work\nThanks so much for your reply, Albert. Reflecting on the algorithm for the calculation of CDF_Roy, the obstacle is in the calculation of the determinant. The result varies according to the precision with which the values of the matrix “A” are calculated. For example: to obtain the true result of “CDF_Roy (8,15,5,0.959)”, it would take a calculation precision comprising several hundred values after the decimal point. In short, the values that make up the matrix \"A\" should have at least 100 or more decimal precision, so that the right result is obtained.\nEven using your suggestion, which consists in calculating the beta function with continuous fractions and with “loggamma”, the problem arises again in the calculation of the determinant of the matrix “A” (see \"RETURN √(DET(A));\" at the bottom of the algorithm).\nCode:\n#cas CDF_Roy(s,m,n,theta):= BEGIN LOCAL A, ii, j, b, a, adzero, aa, cc; A:=MAKEMAT(0,s,s); b:=MAKELIST(0,x,1,s); cc(x):=sqrt(π)*Gamma((2*m+2*n+s+x+2)/ 2)/(Gamma((2*m+x+1)/2)*Gamma((2*n+x+ 1)/2)*Gamma(x/2)); FOR ii FROM 1 TO s DO b:=REPLACE(b,ii,{(Beta(m+ii,n+1,theta) ^2)/2}); FOR j FROM ii TO s-1 DO b:=REPLACE(b,j+1,{(m+j)/(m+j+n+1)* b(j)-Beta(2*m+ii+j,2*n+2,theta)/ (m+j+n+1)}); a:=(Beta(m+ii,n+1,theta)*Beta(m+j+1, n+1,theta)-2*b(j+1))*cc(ii)* cc(j+1); A:=REPLACE(A,{ii,j+1},[[a]]); END; END; aa:={}; FOR ii FROM 1 TO s DO aa:=CONCAT(aa,{Beta(m+ii,n+1,theta)* cc(ii)}); END; aa:=ListToMat(aa); adzero:=MAKELIST(0,x,1,s+1,1); adzero:=ListToMat(adzero); IF odd(s)==1 THEN A:=ADDCOL(A,aa,s+1); A:=ADDROW(A,adzero,s+1); END; A:=A-TRN(A); RETURN √(DET(A)); END; #end\n\nHowever, in this post, my question is: “how can I get the result of a program written with PYTHON in an HPPPL environment without going through 'the terminal'?”.\nBest regards, Roberto\n « Next Oldest | Next Newest »\n\n Messages In This Thread PYTHON 
program not work - robmio - 09-05-2021, 05:59 AM RE: PYTHON program not work - robmio - 09-05-2021, 08:54 AM RE: PYTHON program not work - robmio - 09-05-2021, 11:06 AM RE: PYTHON program not work - Albert Chan - 09-05-2021, 03:58 PM RE: PYTHON program not work - robmio - 09-05-2021 04:54 PM\n\nUser(s) browsing this thread: 1 Guest(s)" ]
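The precision point raised in the post is easy to demonstrate outside the calculator. This Python sketch (not HP PPL, and not the CDF_Roy matrix itself) runs Gaussian elimination over exact rationals, which is one way to sidestep the cancellation that ruins a floating-point determinant of an ill-conditioned matrix such as a Hilbert matrix:

```python
from fractions import Fraction

def exact_det(rows):
    # Gaussian elimination in exact Fraction arithmetic; returns the determinant
    m = [list(r) for r in rows]
    n = len(m)
    det = Fraction(1)
    for i in range(n):
        pivot_row = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot_row is None:
            return Fraction(0)
        if pivot_row != i:
            m[i], m[pivot_row] = m[pivot_row], m[i]
            det = -det  # a row swap flips the sign
        det *= m[i][i]
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= factor * m[i][c]
    return det
```

For the 4x4 Hilbert matrix H[i][j] = 1/(i+j+1) this returns exactly 1/6048000, a value a double-precision determinant can only approximate; the same idea (exact or very-high-precision arithmetic for the determinant step) is what the CDF_Roy computation appears to need.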
[ null, "https://www.hpmuseum.org/forum/images/buddy_offline.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8599455,"math_prob":0.99181753,"size":1505,"snap":"2023-40-2023-50","text_gpt3_token_len":412,"char_repetition_ratio":0.15189873,"word_repetition_ratio":0.11067194,"special_character_ratio":0.29900333,"punctuation_ratio":0.14332248,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976647,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T05:27:01Z\",\"WARC-Record-ID\":\"<urn:uuid:247dca6e-3c9f-473c-a541-8ff7bf0a03f3>\",\"Content-Length\":\"18950\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a63d6165-ebb5-4090-9944-e9b372fe3b88>\",\"WARC-Concurrent-To\":\"<urn:uuid:e43109b4-b5c9-4e01-91ef-0865f10c8381>\",\"WARC-IP-Address\":\"216.92.167.20\",\"WARC-Target-URI\":\"https://www.hpmuseum.org/forum/showthread.php?mode=threaded&tid=17432&pid=151764\",\"WARC-Payload-Digest\":\"sha1:YJ3EILCBJCCULULJDMBBPMHYQMKYSE5K\",\"WARC-Block-Digest\":\"sha1:T5PLQF3O4V3GLD77FDI3GCATJC4VEEHG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100164.87_warc_CC-MAIN-20231130031610-20231130061610-00004.warc.gz\"}"}
https://image.hanspub.org/xml/41976.xml
[ "Research on the Properties of Cantor Sets and Cantor Functions\n\nJiaxuan Luan\n\nSchool of Mathematics, Liaoning Normal University, Dalian Liaoning", null, "Received: Mar. 25th, 2021; accepted: Apr. 15th, 2021; published: Apr. 28th, 2021", null, "ABSTRACT\n\nThe Cantor set was introduced by the German mathematician Georg Cantor in 1883. Because of its ingenious construction and unique properties, the Cantor set is widely used and provides ideas and methods for solving many mathematical problems. The Cantor function is constructed from the Cantor set, and its unusual properties can likewise be applied to many mathematical problems. These special and remarkable properties give both objects an incomparable charm, which has attracted many mathematicians to explore and study them. Starting from the construction of the Cantor set and the Cantor function, this paper mainly discusses their properties.\n\nKeywords: Cantor Set, Cantor Function, Cantor Set Properties, Cantor Function Properties", null, "", null, "1. Introduction\n\nThe Cantor set was introduced by the German mathematician Georg Cantor in 1883. Because of its ingenious construction and unique properties, it is widely used and has provided ideas and methods for many mathematical problems. Moreover, by studying the Cantor set, Cantor and other mathematicians laid the foundations of modern point-set topology. The Cantor function is a continuous, monotonically increasing function on [0, 1] that is differentiable almost everywhere; it plays an important role in solving practical problems, and its unusual properties have attracted many mathematicians: it does not satisfy the Newton-Leibniz formula, it maps a nowhere dense set onto a full interval, and its derivative satisfies Θ′(x) = 0 a.e. on [0, 1], so that ∫_[0,1] Θ′(x) dm = 0.\n\n2. The Cantor Set\n\n2.1. Construction of the Cantor Set\n\nThe Cantor set, also called the Cantor ternary set or the Cantor perfect set, is a very important point set in mathematics. On a bounded closed interval [a, b] (a < b) it is constructed by repeatedly removing the open middle third of every remaining interval:\n\nF_1 = F([a, b]) = [a, a + (b − a)/3] ∪ [b − (b − a)/3, b];\n\nF_2 = F(F([a, b])) = [a, a + (b − a)/3^2] ∪ [a + 2(b − a)/3^2, a + 3(b − a)/3^2] ∪ [b − 3(b − a)/3^2, b − 2(b − a)/3^2] ∪ [b − (b − a)/3^2, b];\n\nand in general\n\nF_k = F(F_{k−1}([a, b])) = [a, a + (b − a)/3^k] ∪ [a + 2(b − a)/3^k, a + 3(b − a)/3^k] ∪ ⋯ ∪ [b − (3^k − 3)(b − a)/3^k, b − (3^k − 2)(b − a)/3^k] ∪ [b − (3^k − 1)(b − a)/3^k, b].\n\nThe Cantor set is the limit C = lim_{k→∞} F_k([a, b]).
\n\n2.2. Properties of the Cantor Set and Their Proofs\n\nThe affine map φ(t) = (1 − t)a + tb, t ∈ [0, 1], carries the construction on [0, 1] onto [a, b], so it suffices to study the case [a, b] = [0, 1]. The union of all removed open intervals is G = ∪_{i=1}^∞ ([0, 1] \ F_i([0, 1])), and its total length is ∑_{i=1}^∞ 2^{i−1}/3^i = 1. Consequently the Cantor set C = ∩_{i=1}^∞ F_i has Lebesgue measure zero: each F_i consists of 2^i closed intervals of length 3^{−i}, so m*(C) ≤ m*(F_i) ≤ 2^i · 3^{−i} → 0 as i → ∞.\n\n3. The Cantor Function\n\n3.1. Construction of the Cantor Function\n\nAt the k-th step of the construction the removed open intervals are\n\n(1/3^k, 2/3^k), (7/3^k, 8/3^k), (19/3^k, 20/3^k), ⋯, ((3^k − 2)/3^k, (3^k − 1)/3^k),\n\nand on them the Cantor function Θ takes the constant values\n\n1/2^k, 3/2^k, 5/2^k, ⋯, (2^k − 1)/2^k\n\nrespectively. For points of the Cantor set other than the endpoints one sets Θ(x) = sup{Θ(t) | t ∈ G, t < x}. Altogether: Θ(0) = 0; Θ(1) = 1; Θ(x) = 1/2 on (1/3, 2/3); Θ(x) = 1/2^2 on (1/3^2, 2/3^2); Θ(x) = 3/2^2 on (7/3^2, 8/3^2); ⋯; Θ(x) = 1/2^k on (1/3^k, 2/3^k); Θ(x) = 3/2^k on (7/3^k, 8/3^k); Θ(x) = 5/2^k on (19/3^k, 20/3^k); ⋯; Θ(x) = (2^k − 1)/2^k on ((3^k − 2)/3^k, (3^k − 1)/3^k); ⋯; and Θ(x) = sup{Θ(t) | t ∈ G, t < x} for x ∈ C, x ∉ {0, 1}.\n\n3.2. Properties of the Cantor Function and Their Proofs\n\nΘ is continuous and monotone, but it is not absolutely continuous. Take ε_0 = 1/2 and an arbitrary δ > 0, and choose k with 2^k/3^k < δ. Let\n\n[x_1, y_1], [x_2, y_2], ⋯, [x_p, y_p], p = 2^k,\n\nbe the closed intervals making up F_k, so that\n\n0 = x_1 < y_1 ≤ x_2 < y_2 ≤ ⋯ ≤ x_p < y_p = 1, y_i − x_i = 1/3^k, i = 1, 2, ⋯, p.\n\nThen\n\n∑_{i=1}^p |y_i − x_i| = p/3^k = 2^k/3^k < δ,\n\nwhile\n\n0 = Θ(x_1) < Θ(y_1) = Θ(x_2) < Θ(y_2) = ⋯ = Θ(x_p) < Θ(y_p) = 1,\n\nso that\n\n∑_{i=1}^p |Θ(y_i) − Θ(x_i)| = 1 > 1/2 = ε_0.
\n\n4. Applications of the Cantor Set and the Cantor Function\n\nThe constructions of the Cantor set and the Cantor function are ingenious and their properties very special. The Cantor set was produced by Cantor in his work on trigonometric series, and it has several important features. The Cantor function is built from the Cantor set, and its unusual properties can also be applied to many mathematical problems.\n\n4.1. The Cantor Set as a Counterexample\n\ni) “Every Lebesgue measurable set is a Borel set.” Since the Cantor set C has outer measure zero, every subset of C is Lebesgue measurable; but C has 2^c subsets while there are only c Borel sets (c being the cardinality of the continuum), so some subset of C is Lebesgue measurable without being Borel.\n\nii) “Every set of measure zero is countable.” The Cantor set has measure zero, yet it has the cardinality of the continuum.\n\niii) “Every nowhere dense set is a set of isolated points.” The Cantor set is nowhere dense, but C′ = C, i.e. C is perfect, so it has no isolated points.\n\niv) “Every nonempty closed set without isolated points has interior points.” The Cantor set is a nonempty closed set without isolated points, but since it is also nowhere dense it has no interior points.\n\n4.2. The Cantor Function as a Counterexample\n\ni) “The image of a measurable set under a continuous map is measurable.” From the Cantor function Θ(x) define f(x) = (x + Θ(x))/2; then f : [0, 1] → [0, 1] is a strictly increasing continuous function with m(f(C)) = 1/2, where C is the Cantor set. Take a nonmeasurable set W ⊂ f(C); then f^{−1}(W) ⊂ C is measurable (a null set), yet f(f^{−1}(W)) = W is not measurable.\n\nii) “The preimage of a measurable set under a continuous map is measurable.” Apply i) to the continuous inverse g = f^{−1}: for the measurable set A = f^{−1}(W) the preimage g^{−1}(A) = f(A) = W is not measurable.\n\niii) “Every continuous function satisfies the Newton-Leibniz formula.” The Cantor function Θ(x) does not. Θ is differentiable almost everywhere on [0, 1] with Θ′(x) = 0 a.e., and Θ′ is continuous almost everywhere on [0, 1], hence Riemann integrable; but\n\n∫_0^1 Θ′(x) dx = 0 < 1 = Θ(1) − Θ(0)." ]
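The stepwise values of Θ listed above agree with the classical ternary-digit description of the Cantor function, which is easy to evaluate numerically. The sketch below (our own illustration, not from the paper; the name `cantor_theta` is ours) computes Θ(x) from the ternary expansion of x: halve each ternary digit (2 becomes 1) and read the result as a binary expansion, stopping at the first digit 1.

```python
def cantor_theta(x, digits=30):
    """Approximate the Cantor function Θ(x) for x in [0, 1] via the
    ternary expansion of x: digit 2 contributes a binary 1, and the
    first ternary digit 1 (a removed middle third) ends the expansion."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must lie in [0, 1]")
    if x == 1.0:
        return 1.0
    result, power = 0.0, 0.5
    for _ in range(digits):
        x *= 3.0
        d = int(x)        # next ternary digit of the original x
        x -= d
        if d == 1:        # x falls inside a removed interval: Θ is constant there
            return result + power
        result += (d // 2) * power   # ternary digit 2 -> binary digit 1
        power /= 2.0
    return result

# Θ equals 1/2 everywhere on the first removed interval (1/3, 2/3):
print(cantor_theta(0.4), cantor_theta(0.5), cantor_theta(0.6))
```

This also reproduces the known value Θ(1/4) = 1/3, since 1/4 = 0.020202…​ in base 3.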
[ null, "https://html.hanspub.org/file/44-2621573x4_hanspub.png", null, "https://html.hanspub.org/file/44-2621573x5_hanspub.png", null, "https://html.hanspub.org/file/44-2621573x7_hanspub.png", null, "https://html.hanspub.org/file/44-2621573x8_hanspub.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.6342802,"math_prob":0.9953663,"size":8663,"snap":"2021-21-2021-25","text_gpt3_token_len":5630,"char_repetition_ratio":0.1816607,"word_repetition_ratio":0.37246248,"special_character_ratio":0.45446151,"punctuation_ratio":0.12691218,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98857087,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-12T20:11:09Z\",\"WARC-Record-ID\":\"<urn:uuid:f4ca1f56-6f73-413f-9105-1b8f8ec739fb>\",\"Content-Length\":\"20733\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:acda3b94-b94c-46d0-aa75-a349bb5275da>\",\"WARC-Concurrent-To\":\"<urn:uuid:a447330e-6519-419e-91b2-264061f3e9c4>\",\"WARC-IP-Address\":\"118.186.244.130\",\"WARC-Target-URI\":\"https://image.hanspub.org/xml/41976.xml\",\"WARC-Payload-Digest\":\"sha1:GEK5LAF2P7HAPD5VZWVXFBP3FLLXCBN5\",\"WARC-Block-Digest\":\"sha1:HXXADUF3YDUKNMV3W5XBKWNUWEX74MJH\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989705.28_warc_CC-MAIN-20210512193253-20210512223253-00367.warc.gz\"}"}
http://trac.sasview.org/browser/sasmodels/sasmodels/models/fcc_paracrystal.py?annotate=blame&rev=b297ba9e837aada12de87cd46ea7a01bfe549da7
[ "# source:sasmodels/sasmodels/models/fcc_paracrystal.py@b297ba9\n\ncore_shell_microgelsmagnetic_modelticket-1257-vesicle-productticket_1156ticket_1265_superballticket_822_more_unit_tests\nLast change on this file since b297ba9 was b297ba9, checked in by Paul Kienzle <pkienzle@…>, 8 months ago\n\nlint\n\n• Property mode set to 100644\nFile size: 6.1 KB\nRevLine\n[e7b3d7b]1#fcc paracrystal model\n[3271e20]2#note model title and parameter table are automatically inserted\n3#note - calculation requires double precision\n[e7b3d7b]4r\"\"\"\n[b297ba9]5.. warning:: This model and this model description are under review following\n6             concerns raised by SasView users. If you need to use this model,\n7             please email [email protected] for the latest situation. *The\n[da7b26b]8             SasView Developers. September 2018.*\n9\n10Definition\n11----------\n12\n[3c56da87]13Calculates the scattering from a **face-centered cubic lattice** with\n14paracrystalline distortion. Thermal vibrations are considered to be\n15negligible, and the size of the paracrystal is infinitely large.\n16Paracrystalline distortion is assumed to be isotropic and characterized by\n17a Gaussian distribution.\n[3271e20]18\n[d138d43]19The scattering intensity $I(q)$ is calculated as\n20\n21.. 
math::\n[3271e20]22\n[eb69cce]23    I(q) = \\frac{\\text{scale}}{V_p} V_\\text{lattice} P(q) Z(q)\n[3271e20]24\n[d138d43]25where *scale* is the volume fraction of spheres, $V_p$ is the volume of\n[eb69cce]26the primary particle, $V_\\text{lattice}$ is a volume correction for the crystal\n[d138d43]27structure, $P(q)$ is the form factor of the sphere (normalized), and $Z(q)$\n[3c56da87]28is the paracrystalline structure factor for a face-centered cubic structure.\n[3271e20]29\n[da7b26b]30Equation (1) of the 1990 reference\\ [#CIT1990]_ is used to calculate $Z(q)$,\n31using equations (23)-(25) from the 1987 paper\\ [#CIT1987]_ for $Z1$, $Z2$, and\n32$Z3$.\n[3271e20]33\n[3c56da87]34The lattice correction (the occupied volume of the lattice) for a\n[eb69cce]35face-centered cubic structure of particles of radius $R$ and nearest\n[d138d43]36neighbor separation $D$ is\n[3271e20]37\n[d138d43]38.. math::\n39\n40   V_\\text{lattice} = \\frac{16\\pi}{3}\\frac{R^3}{\\left(D\\sqrt{2}\\right)^3}\n[3271e20]41\n[3c56da87]42The distortion factor (one standard deviation) of the paracrystal is\n[d138d43]43included in the calculation of $Z(q)$\n44\n45.. math::\n[3271e20]46\n[d138d43]47    \\Delta a = gD\n[3271e20]48\n[d138d43]49where $g$ is a fractional distortion based on the nearest neighbor distance.\n[3271e20]50\n[2f0c07d]51.. figure:: img/fcc_geometry.jpg\n[3271e20]52\n[d138d43]53    Face-centered cubic lattice.\n[3271e20]54\n55For a crystal, diffraction peaks appear at reduced q-values given by\n56\n[d138d43]57.. math::\n[3271e20]58\n[d138d43]59    \\frac{qD}{2\\pi} = \\sqrt{h^2 + k^2 + l^2}\n60\n61where for a face-centered cubic lattice $h, k , l$ all odd or all\n62even are allowed and reflections where $h, k, l$ are mixed odd/even\n[3c56da87]63are forbidden. Thus the peak positions correspond to (just the first 5)\n[3271e20]64\n[d138d43]65.. 
math::\n66\n67    \\begin{array}{cccccc}\n68    q/q_0 & 1 & \\sqrt{4/3} & \\sqrt{8/3} & \\sqrt{11/3} & \\sqrt{4} \\\\\n69    \\text{Indices} & (111)  & (200) & (220) & (311) & (222)\n70    \\end{array}\n[3271e20]71\n[eda8b30]72.. note::\n73\n74  The calculation of $Z(q)$ is a double numerical integral that\n75  must be carried out with a high density of points to properly capture\n[1f159bd]76  the sharp peaks of the paracrystalline scattering.\n77  So be warned that the calculation is slow. Fitting of any experimental data\n[eda8b30]78  must be resolution smeared for any meaningful fit. This makes a triple integral\n79  which may be very slow.\n[3271e20]80\n[eb69cce]81The 2D (Anisotropic model) is based on the reference below where $I(q)$ is\n[3c56da87]82approximated for 1d scattering. Thus the scattering pattern for 2D may not\n[1f159bd]83be accurate particularly at low $q$. For general details of the calculation\n[eda8b30]84and angular dispersions for oriented particles see :ref:orientation .\n85Note that we are not responsible for any incorrectness of the\n[3c56da87]862D model computation.\n[3271e20]87\n[1f65db5]88.. figure:: img/parallelepiped_angle_definition.png\n[d138d43]89\n[404ebbd]90    Orientation of the crystal with respect to the scattering plane, when\n[1f65db5]91    $\\theta = \\phi = 0$ the $c$ axis is along the beam direction (the $z$ axis).\n[3271e20]92\n[eb69cce]93References\n94----------\n[3271e20]95\n[da7b26b]96.. [#CIT1987] Hideki Matsuoka et. al. *Physical Review B*, 36 (1987) 1754-1765\n97   (Original Paper)\n98.. [#CIT1990] Hideki Matsuoka et. al. 
*Physical Review B*, 41 (1990) 3854 -3856\n99   (Corrections to FCC and BCC lattice structure calculation)\n100\n101Authorship and Verification\n102---------------------------\n[3271e20]103\n[da7b26b]104* **Author:** NIST IGOR/DANSE **Date:** pre 2010\n106* **Last Reviewed by:** Richard Heenan **Date:** March 21, 2016\n[3271e20]107\"\"\"\n108\n[2d81cfe]109import numpy as np\n[e2d6e3b]110from numpy import inf, pi\n[3271e20]111\n[e7b3d7b]112name = \"fcc_paracrystal\"\n113title = \"Face-centred cubic lattic with paracrystalline distortion\"\n[3271e20]114description = \"\"\"\n[e7b3d7b]115    Calculates the scattering from a **face-centered cubic lattice** with paracrystalline distortion. Thermal vibrations\n[3271e20]116    are considered to be negligible, and the size of the paracrystal is infinitely large. Paracrystalline distortion is\n117    assumed to be isotropic and characterized by a Gaussian distribution.\n118    \"\"\"\n[a5d0d00]119category = \"shape:paracrystal\"\n[3271e20]120\n[13ed84c]121single = False\n122\n[3e428ec]124#             [\"name\", \"units\", default, [lower, upper], \"type\",\"description\"],\n125parameters = [[\"dnn\", \"Ang\", 220, [-inf, inf], \"\", \"Nearest neighbour distance\"],\n126              [\"d_factor\", \"\", 0.06, [-inf, inf], \"\", \"Paracrystal distortion factor\"],\n[42356c8]128              [\"sld\", \"1e-6/Ang^2\", 4, [-inf, inf], \"sld\", \"Particle scattering length density\"],\n129              [\"sld_solvent\", \"1e-6/Ang^2\", 1, [-inf, inf], \"sld\", \"Solvent scattering length density\"],\n[9b79f29]130              [\"theta\",       \"degrees\",    60,    [-360, 360], \"orientation\", \"c axis to beam angle\"],\n131              [\"phi\",         \"degrees\",    60,    [-360, 360], \"orientation\", \"rotation about beam\"],\n132              [\"psi\",         \"degrees\",    60,    [-360, 360], \"orientation\", \"rotation about c axis\"]\n[3e428ec]133             ]\n[3e428ec]135\n[925ad6e]136source = 
[\"lib/sas_3j1x_x.c\", \"lib/gauss150.c\", \"lib/sphere_form.c\", \"fcc_paracrystal.c\"]\n[3271e20]137\n[404ebbd]138def random():\n[b297ba9]139    \"\"\"Return a random parameter set for the model.\"\"\"\n[1511c37c]140    # copied from bcc_paracrystal\n142    d_factor = 10**np.random.uniform(-2, -0.7)  # sigma_d in 0.01-0.7\n[404ebbd]143    dnn_fraction = np.random.beta(a=10, b=1)\n[404ebbd]145    pars = dict(\n146        #sld=1, sld_solvent=0, scale=1, background=1e-32,\n[1511c37c]147        dnn=dnn,\n148        d_factor=d_factor," ]
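The reflection table in the docstring can be reproduced independently of the model code. The helper below (our own sketch, not part of sasmodels) enumerates Miller indices obeying the FCC selection rule (h, k, l all even or all odd) and returns the first peak-position ratios q/q0 = sqrt((h² + k² + l²)/3), with q0 the (111) peak:

```python
import math

def fcc_peak_ratios(hkl_max=3, count=5):
    """First `count` allowed FCC peak positions q/q0, where
    q/q0 = sqrt((h^2 + k^2 + l^2) / 3) and (111) defines q0."""
    s_values = set()
    rng = range(0, hkl_max + 1)
    for h in rng:
        for k in rng:
            for l in rng:
                s = h * h + k * k + l * l
                if s == 0:
                    continue
                if len({h % 2, k % 2, l % 2}) == 1:  # all even or all odd
                    s_values.add(s)
    return [math.sqrt(s / 3.0) for s in sorted(s_values)[:count]]

# (111), (200), (220), (311), (222) -> 1, sqrt(4/3), sqrt(8/3), sqrt(11/3), 2
print(fcc_peak_ratios())
```

Mixed-parity triples such as (210) never enter the set, matching the "forbidden reflections" statement in the docstring.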
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57037425,"math_prob":0.9747323,"size":7718,"snap":"2019-43-2019-47","text_gpt3_token_len":2776,"char_repetition_ratio":0.12444905,"word_repetition_ratio":0.032588456,"special_character_ratio":0.46398032,"punctuation_ratio":0.168,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9873751,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T20:53:24Z\",\"WARC-Record-ID\":\"<urn:uuid:5fefbb78-ea83-4004-96d1-6a6c876842be>\",\"Content-Length\":\"75449\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4771f7f0-007d-4a98-bd37-9480ddd18d0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:32023093-b082-4708-ade3-af5b8478d4c5>\",\"WARC-IP-Address\":\"160.36.200.68\",\"WARC-Target-URI\":\"http://trac.sasview.org/browser/sasmodels/sasmodels/models/fcc_paracrystal.py?annotate=blame&rev=b297ba9e837aada12de87cd46ea7a01bfe549da7\",\"WARC-Payload-Digest\":\"sha1:4AMS5SWGQP6QUZRESVZP4OKVUZMMNDQA\",\"WARC-Block-Digest\":\"sha1:FSFYERCERRRW7CS74DGW7JKQISCL6G2M\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670601.75_warc_CC-MAIN-20191120185646-20191120213646-00549.warc.gz\"}"}
https://www.colorhexa.com/5296a1
[ "# #5296a1 Color Information\n\nIn a RGB color space, hex #5296a1 is composed of 32.2% red, 58.8% green and 63.1% blue. Whereas in a CMYK color space, it is composed of 49.1% cyan, 6.8% magenta, 0% yellow and 36.9% black. It has a hue angle of 188.4 degrees, a saturation of 32.5% and a lightness of 47.6%. #5296a1 color hex could be obtained by blending #a4ffff with #002d43. Closest websafe color is: #669999.\n\n• R 32\n• G 59\n• B 63\nRGB color chart\n• C 49\n• M 7\n• Y 0\n• K 37\nCMYK color chart\n\n#5296a1 color description : Dark moderate cyan.\n\n# #5296a1 Color Conversion\n\nThe hexadecimal color #5296a1 has RGB values of R:82, G:150, B:161 and CMYK values of C:0.49, M:0.07, Y:0, K:0.37. Its decimal value is 5412513.\n\nHex triplet RGB Decimal 5296a1 `#5296a1` 82, 150, 161 `rgb(82,150,161)` 32.2, 58.8, 63.1 `rgb(32.2%,58.8%,63.1%)` 49, 7, 0, 37 188.4°, 32.5, 47.6 `hsl(188.4,32.5%,47.6%)` 188.4°, 49.1, 63.1 669999 `#669999`\nCIE-LAB 58.206, -18.461, -12.464 20.817, 26.178, 37.672 0.246, 0.309, 26.178 58.206, 22.275, 214.025 58.206, -30.03, -15.777 51.165, -16.913, -7.839 01010010, 10010110, 10100001\n\n# Color Schemes with #5296a1\n\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #a15d52\n``#a15d52` `rgb(161,93,82)``\nComplementary Color\n• #52a185\n``#52a185` `rgb(82,161,133)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #526fa1\n``#526fa1` `rgb(82,111,161)``\nAnalogous Color\n• #a18552\n``#a18552` `rgb(161,133,82)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #a1526f\n``#a1526f` `rgb(161,82,111)``\nSplit Complementary Color\n• #96a152\n``#96a152` `rgb(150,161,82)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #a15296\n``#a15296` `rgb(161,82,150)``\nTriadic Color\n• #52a15d\n``#52a15d` `rgb(82,161,93)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #a15296\n``#a15296` `rgb(161,82,150)``\n• #a15d52\n``#a15d52` `rgb(161,93,82)``\nTetradic Color\n• #38676e\n``#38676e` `rgb(56,103,110)``\n• #41777f\n``#41777f` `rgb(65,119,127)``\n• #498690\n``#498690` 
`rgb(73,134,144)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #5fa3ae\n``#5fa3ae` `rgb(95,163,174)``\n• #70acb6\n``#70acb6` `rgb(112,172,182)``\n• #81b6bf\n``#81b6bf` `rgb(129,182,191)``\nMonochromatic Color\n\n# Alternatives to #5296a1\n\nBelow, you can see some colors close to #5296a1. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #52a198\n``#52a198` `rgb(82,161,152)``\n• #52a19f\n``#52a19f` `rgb(82,161,159)``\n• #529da1\n``#529da1` `rgb(82,157,161)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #528fa1\n``#528fa1` `rgb(82,143,161)``\n• #5289a1\n``#5289a1` `rgb(82,137,161)``\n• #5282a1\n``#5282a1` `rgb(82,130,161)``\nSimilar Colors\n\n# #5296a1 Preview\n\nText with hexadecimal color #5296a1\n\nThis text has a font color of #5296a1.\n\n``<span style=\"color:#5296a1;\">Text here</span>``\n#5296a1 background color\n\nThis paragraph has a background color of #5296a1.\n\n``<p style=\"background-color:#5296a1;\">Content here</p>``\n#5296a1 border color\n\nThis element has a border color of #5296a1.\n\n``<div style=\"border:1px solid #5296a1;\">Content here</div>``\nCSS codes\n``.text {color:#5296a1;}``\n``.background {background-color:#5296a1;}``\n``.border {border:1px solid #5296a1;}``\n\n# Shades and Tints of #5296a1\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #030505 is the darkest color, while #f7fafb is the lightest one.\n\n• #030505\n``#030505` `rgb(3,5,5)``\n• #091112\n``#091112` `rgb(9,17,18)``\n• #101d1f\n``#101d1f` `rgb(16,29,31)``\n• #16292c\n``#16292c` `rgb(22,41,44)``\n• #1d3539\n``#1d3539` `rgb(29,53,57)``\n• #244146\n``#244146` `rgb(36,65,70)``\n• #2a4d53\n``#2a4d53` `rgb(42,77,83)``\n• #315960\n``#315960` `rgb(49,89,96)``\n• #38666d\n``#38666d` `rgb(56,102,109)``\n• #3e727a\n``#3e727a` `rgb(62,114,122)``\n• #457e87\n``#457e87` `rgb(69,126,135)``\n• #4b8a94\n``#4b8a94` `rgb(75,138,148)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\nShade Color Variation\n• #5ba0ac\n``#5ba0ac` `rgb(91,160,172)``\n• #68a8b2\n``#68a8b2` `rgb(104,168,178)``\n• #75afb9\n``#75afb9` `rgb(117,175,185)``\n• #82b7bf\n``#82b7bf` `rgb(130,183,191)``\n• #8fbec6\n``#8fbec6` `rgb(143,190,198)``\n• #9cc6cd\n``#9cc6cd` `rgb(156,198,205)``\n• #a9cdd3\n``#a9cdd3` `rgb(169,205,211)``\n• #b6d5da\n``#b6d5da` `rgb(182,213,218)``\n• #c3dce0\n``#c3dce0` `rgb(195,220,224)``\n• #d0e4e7\n``#d0e4e7` `rgb(208,228,231)``\n• #ddebee\n``#ddebee` `rgb(221,235,238)``\n• #eaf3f4\n``#eaf3f4` `rgb(234,243,244)``\n• #f7fafb\n``#f7fafb` `rgb(247,250,251)``\nTint Color Variation\n\n# Tones of #5296a1\n\nA tone is produced by adding gray to any pure hue. 
In this case, #777b7c is the less saturated color, while #07ccec is the most saturated one.\n\n• #777b7c\n``#777b7c` `rgb(119,123,124)``\n• #6e8285\n``#6e8285` `rgb(110,130,133)``\n• #65898e\n``#65898e` `rgb(101,137,142)``\n• #5b8f98\n``#5b8f98` `rgb(91,143,152)``\n• #5296a1\n``#5296a1` `rgb(82,150,161)``\n• #499daa\n``#499daa` `rgb(73,157,170)``\n• #3fa3b4\n``#3fa3b4` `rgb(63,163,180)``\n• #36aabd\n``#36aabd` `rgb(54,170,189)``\n• #2db1c6\n``#2db1c6` `rgb(45,177,198)``\n• #23b8d0\n``#23b8d0` `rgb(35,184,208)``\n• #1abed9\n``#1abed9` `rgb(26,190,217)``\n• #11c5e2\n``#11c5e2` `rgb(17,197,226)``\n• #07ccec\n``#07ccec` `rgb(7,204,236)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #5296a1 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
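The RGB and HSL figures quoted above follow from the standard conversions. With Python's built-in `colorsys` module (note it returns hue, lightness, saturation as 0-1 fractions, in that order) the page's numbers for #5296a1 can be reproduced:

```python
import colorsys

def hex_to_rgb(hex_color):
    """Split a '#rrggbb' string into integer (r, g, b) components."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

r, g, b = hex_to_rgb('#5296a1')                       # (82, 150, 161)
hue, light, sat = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(r, g, b)
# hue angle, saturation %, lightness % as on the page: 188.4, 32.5, 47.6
print(round(hue * 360, 1), round(sat * 100, 1), round(light * 100, 1))
```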
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59447247,"math_prob":0.76181066,"size":3722,"snap":"2021-04-2021-17","text_gpt3_token_len":1685,"char_repetition_ratio":0.119688004,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5601827,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98998076,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T08:08:32Z\",\"WARC-Record-ID\":\"<urn:uuid:d7e2a3ad-1212-47a4-8724-c4b54d8be23b>\",\"Content-Length\":\"36326\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2cd21c68-a3ea-4ded-92ab-dedbed54e3a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:9046d996-f690-4b77-b503-fd29392acc0c>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/5296a1\",\"WARC-Payload-Digest\":\"sha1:4DT4IBJVVOQKTA4OQWYQTFJB5JLCATVR\",\"WARC-Block-Digest\":\"sha1:LEGM3Q2RYKSB5AJ6FEB5J4HN7YYCZNEG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039526421.82_warc_CC-MAIN-20210421065303-20210421095303-00004.warc.gz\"}"}
https://forum.solidworks.com/thread/199037
[ "# Bend arc length calculation\n\nSketched Bend 4 feature has Bend radius 0.433 mm, Arc length 0.816 mm, Bend Angle 90 degrees\n\nI tried to find a formula that will return arc length, but couldn't do it.\n\nCan someone suggest what the calculation should be?\n\nQuestion: Given Bend arc radius & angle, calculate arc length. (Radius = r, angle = theta)\n\nr * theta = arc length doesn't work here." ]
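One common reason plain r·θ does not reproduce the reported number is that the arc may be measured at a different radius than the quoted (inner) bend radius, for example at the neutral axis, as in the standard sheet-metal bend-allowance formula BA = (R + K·T)·θ with T the sheet thickness and K the K-factor. The sketch below backs out the effective radius from the figures in the question; the T and K values at the end are illustrative assumptions, not data from the post:

```python
import math

R = 0.433                 # quoted inner bend radius (mm)
theta = math.radians(90)  # bend angle
arc_reported = 0.816      # arc length reported by the CAD feature (mm)

inner_arc = R * theta            # plain r*theta at the inner radius (~0.680 mm)
r_eff = arc_reported / theta     # radius the reported arc actually corresponds to
offset = r_eff - R               # how far that radius sits from the inner face

print(f"r*theta at inner radius: {inner_arc:.3f} mm")
print(f"effective radius of reported arc: {r_eff:.3f} mm (offset {offset:.3f} mm)")

# Bend allowance BA = (R + K*T) * theta; T and K below are hypothetical values
# chosen only to show the formula, not taken from the question:
T, K = 0.3, 0.29
print(f"bend allowance with T={T}, K={K}: {(R + K * T) * theta:.3f} mm")
```

So the reported 0.816 mm corresponds to an arc at roughly R + 0.087 mm, consistent with the arc being measured away from the inner face rather than at the quoted radius.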
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89267385,"math_prob":0.9833482,"size":1352,"snap":"2020-45-2020-50","text_gpt3_token_len":303,"char_repetition_ratio":0.1231454,"word_repetition_ratio":0.0,"special_character_ratio":0.2204142,"punctuation_ratio":0.10294118,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98132235,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-28T22:53:19Z\",\"WARC-Record-ID\":\"<urn:uuid:014c4dc0-26c9-4d49-996b-9bb7aca93d5f>\",\"Content-Length\":\"112085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7760f99-ae97-4111-89fd-35e9f9561967>\",\"WARC-Concurrent-To\":\"<urn:uuid:45ba72f3-2e3d-4ba2-817f-8b53c7deffd5>\",\"WARC-IP-Address\":\"104.103.49.209\",\"WARC-Target-URI\":\"https://forum.solidworks.com/thread/199037\",\"WARC-Payload-Digest\":\"sha1:6OEOZQFVEBZER52WWLATOJ5F23E5PZ6A\",\"WARC-Block-Digest\":\"sha1:KNMFU36U3UESLTFBNNT4W6OIBUN5Z4J5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141195929.39_warc_CC-MAIN-20201128214643-20201129004643-00288.warc.gz\"}"}
https://onecompiler.com/posts/3spsq6dxq/java-variables
[ "", null, "# Java Variables\n\nA variable holds a value during program execution. Following is the syntax for declaring a variable:\n\n``type identifier;``\n\nex:\n\n``int i;``\n\nFollowing is the syntax for initialization:\n\n``identifier = value;``\n\nex:\n\n``i = 10;``\n\nYou can do these two things in one line:\n\n``int i = 10;``\n\nLet me show you a program to declare & initialize all primitive data types of Java:\n\n``````public class PrimitiveDataTypes {\n\npublic static void main(String[] args) {\n\nbyte byteVal = 127;\nshort shortVal = 32767;\nint intValue = 2147483647;\nlong longValue = 9223372036854775807L;\n\nfloat floatValue = 1.5F;\ndouble doubleValue = 12.678;\n\nchar charValue = 'f';\n\nboolean booleanValue = true;\n\n}\n}``````" ]
[ null, "https://onecompiler.com/static/images/logo_v2_small_v2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58380264,"math_prob":0.9750453,"size":646,"snap":"2020-10-2020-16","text_gpt3_token_len":168,"char_repetition_ratio":0.10280374,"word_repetition_ratio":0.01923077,"special_character_ratio":0.3018576,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98124266,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T10:12:30Z\",\"WARC-Record-ID\":\"<urn:uuid:8238e2ec-1288-48e6-b978-88a904e33216>\",\"Content-Length\":\"51118\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ddca1e1-85f4-4686-a23b-732b81d4fbc2>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba48caa8-ec1f-4fcf-97e1-794c6559d50d>\",\"WARC-IP-Address\":\"104.27.150.95\",\"WARC-Target-URI\":\"https://onecompiler.com/posts/3spsq6dxq/java-variables\",\"WARC-Payload-Digest\":\"sha1:VCGXMZOGRP7UW77MR5AIZC53Y7PBAYKK\",\"WARC-Block-Digest\":\"sha1:XPBV6SHKAEUFHGMQX3QWXAWMKV2WIRAP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370505730.14_warc_CC-MAIN-20200401100029-20200401130029-00233.warc.gz\"}"}
https://www.dsprelated.com/freebooks/pasp/Measured_Amplitude_Response.html
[ "#### Measured Amplitude Response\n\nFigure 8.3 shows a plot of simulated amplitude-response measurements at 10 frequencies equally spread out between 100 Hz and 3 kHz on a log frequency scale. The ``measurements'' are indicated by circles. Each circle plots, for example, the output amplitude divided by the input amplitude for a sinusoidal input signal at that frequency . These ten data points are then extended to dc and half the sampling rate, interpolated, and resampled to a uniform frequency grid (solid line in Fig.8.3), as needed for FFT processing. The details of these computations are listed in Fig.8.4. We will fit a four-pole, one-zero, digital-filter frequency-response to these data.9.14", null, "", null, "```NZ = 1; % number of ZEROS in the filter to be designed NP = 4; % number of POLES in the filter to be designed NG = 10; % number of gain measurements fmin = 100; % lowest measurement frequency (Hz) fmax = 3000; % highest measurement frequency (Hz) fs = 10000; % discrete-time sampling rate Nfft = 512; % FFT size to use df = (fmax/fmin)^(1/(NG-1)); % uniform log-freq spacing f = fmin * df .^ (0:NG-1); % measurement frequency axis % Gain measurements (synthetic example = triangular amp response): Gdb = 10*[1:NG/2,NG/2:-1:1]/(NG/2); % between 0 and 10 dB gain % Must decide on a dc value. % Either use what is known to be true or pick something \"maximally % smooth\". Here we do a simple linear extrapolation: dc_amp = Gdb(1) - f(1)*(Gdb(2)-Gdb(1))/(f(2)-f(1)); % Must also decide on a value at half the sampling rate. % Use either a realistic estimate or something \"maximally smooth\". % Here we do a simple linear extrapolation. While zeroing it % is appealing, we do not want any zeros on the unit circle here. Gdb_last_slope = (Gdb(NG) - Gdb(NG-1)) / (f(NG) - f(NG-1)); nyq_amp = Gdb(NG) + Gdb_last_slope * (fs/2 - f(NG)); Gdbe = [dc_amp, Gdb, nyq_amp]; fe = [0,f,fs/2]; NGe = NG+2; % Resample to a uniform frequency grid, as required by ifft. 
% We do this by fitting cubic splines evaluated on the fft grid: Gdbei = spline(fe,Gdbe); % say `help spline' fk = fs*[0:Nfft/2]/Nfft; % fft frequency grid (nonneg freqs) Gdbfk = ppval(Gdbei,fk); % Uniformly resampled amp-resp figure(1); semilogx(fk(2:end-1),Gdbfk(2:end-1),'-k'); grid('on'); axis([fmin/2 fmax*2 -3 11]); hold('on'); semilogx(f,Gdb,'ok'); xlabel('Frequency (Hz)'); ylabel('Magnitude (dB)'); title(['Measured and Extrapolated/Interpolated/Resampled ',... 'Amplitude Response']); ```\n\nNext Section:\nDesired Impulse Response\nPrevious Section:\nDelay Loop Expansion" ]
[ null, "https://www.dsprelated.com/new2/images/dsp_online_conference_banner_336_280.png", null, "http://www.dsprelated.com/josimages_new/pasp/img1862.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77445924,"math_prob":0.9967472,"size":2522,"snap":"2020-34-2020-40","text_gpt3_token_len":741,"char_repetition_ratio":0.10841938,"word_repetition_ratio":0.020356234,"special_character_ratio":0.30927834,"punctuation_ratio":0.1631068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986805,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T00:58:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f48cf400-185d-4f53-824f-84accad8eaa3>\",\"Content-Length\":\"29343\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b16ba753-f6eb-45a5-bd52-bec1adfc66a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:88f60eff-e048-48ee-867b-575006b8c4aa>\",\"WARC-IP-Address\":\"69.16.201.59\",\"WARC-Target-URI\":\"https://www.dsprelated.com/freebooks/pasp/Measured_Amplitude_Response.html\",\"WARC-Payload-Digest\":\"sha1:SLUD24PWBI2YFRDX27G3NQH7TVLDVY4D\",\"WARC-Block-Digest\":\"sha1:CN4OEFB3FTEGFOGBJTUNKOH5U4SGUZZ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400221382.33_warc_CC-MAIN-20200924230319-20200925020319-00232.warc.gz\"}"}
http://mathcentral.uregina.ca/QQ/database/QQ.09.12/h/steven1.html
[ "", null, "", null, "", null, "", null, "Math Central Quandaries & Queries", null, "", null, "Question from Steven, a student: Consider the graph of the function f(x) = 1/x in the first quadrant, and a line tangent to f at a point P where x = k. Find the slope of the line tangent to f at x = k in terms of k and write an equation for the tangent line l in terms of k.", null, "Hi Steven,\n\nFirst let's try this problem with $k = 3.$\n\nThe function is $f(x) = \\large \\frac{1}{x} = x^{-1}$ so the point $P$ has coordinates $\\left(3, \\large \\frac13\\right).$ The derivative of $f(x)$ is $f^{\\prime}(x) = (-1) x^{-2} = - \\large \\frac{1}{x^2}$ and hence the slope of the tangent to $f(x)$ at $x = 3$ is $f^{\\prime}(3) = - \\large \\frac{1}{3^2} = - \\frac19.$ Thus the tangent line $l$ to $f(x)$ at $x = 3$ has equation\n\n$\\left(y - \\frac13\\right) = - \\frac19 (x - 3)$\n\nNow you try it with $x = k.$\n\nPenny", null, "", null, "", null, "", null, "", null, "Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences." ]
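Penny's k = 3 numbers, and the general-k pattern the student is asked to find (slope f′(k) = −1/k², tangent y − 1/k = −(1/k²)(x − k)), can be checked numerically. This sketch is our own verification, not part of the original answer:

```python
def f(x):
    return 1.0 / x

def tangent_at(k):
    """Slope and tangent line of f(x) = 1/x at x = k (first quadrant, k > 0)."""
    slope = -1.0 / k**2                       # f'(k) = -1/k^2
    line = lambda x: f(k) + slope * (x - k)   # y - 1/k = -(1/k^2)(x - k)
    return slope, line

slope, line = tangent_at(3.0)

# sanity check against a central-difference estimate of f'(3)
h = 1e-6
numeric = (f(3.0 + h) - f(3.0 - h)) / (2 * h)
print(slope, numeric, line(3.0))
```

The tangent's y-intercept works out to 2/k (here 2/3), which matches rearranging the point-slope form to y = −x/k² + 2/k.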
[ null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/search.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/qqsponsors.gif", null, "http://mathcentral.uregina.ca/lid/images/mciconnotext.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78733015,"math_prob":1.0000057,"size":635,"snap":"2022-40-2023-06","text_gpt3_token_len":227,"char_repetition_ratio":0.14104596,"word_repetition_ratio":0.03508772,"special_character_ratio":0.41889763,"punctuation_ratio":0.05645161,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000093,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T23:49:32Z\",\"WARC-Record-ID\":\"<urn:uuid:2e85a8ea-50d6-4066-9c94-95a1cc104b4c>\",\"Content-Length\":\"7192\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa71a23a-d6d3-4212-805d-f5ab6fcdeaca>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fbfd33f-e833-4ded-b866-7add7d8e855c>\",\"WARC-IP-Address\":\"142.3.156.40\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QQ/database/QQ.09.12/h/steven1.html\",\"WARC-Payload-Digest\":\"sha1:EOW56V73VUHK3UNTNDGSAZCIPTZM2RK2\",\"WARC-Block-Digest\":\"sha1:DNFDZFFQ76JNB2SLMA7EHI6P2IXDEY4P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337446.8_warc_CC-MAIN-20221003231906-20221004021906-00526.warc.gz\"}"}
https://inviso.dk/blog/post/predictions-with-alteryx
[ "Tableau\n\n# Predictions with Alteryx\n\nIn this post I will show how to leverage the predictive capabilities of Alteryx. The predictive tools are available as a free downloadable package at //downloads.alteryx.com/.", null, "The package contains a wide series of R-based macros that can easily be implemented in an Alteryx workflow.\n\nThese tools differ from the usual workings of Alteryx. Normally the tools in a workflow run through the Alteryx engine, but with the predictive tools an R engine is queried instead.\n\nFor those unfamiliar with R: R is a programming language used for a variety of computing tasks, with statistics and modelling among its capabilities.\n\nThe model\n\nIn this post I will go through a very simple example of building a predictive model in Alteryx.\n\nBasically, this post is concerned with making a model that can predict the gender of an individual based on height and weight.\n\nWhen developing such a model, our first concern should be the characteristics of what we are trying to predict. In this case the dependent variable is binary, which simply means that it only takes on two values, male or female.\n\nHaving a binary dependent variable influences our choice of model. In this example we will use a logistic regression model. We could have chosen from a variety of models for binary responses, the logistic model simply being the usual go-to model in such matters.", null, "The data\n\nWe will develop the model on a dataset containing 10,000 observations and 3 variables, or fields. The dataset is cross-sectional, meaning that we have observations at the individual level and no information about time. The 3 fields of the data contain information about gender, height and weight.\n\nThe dataset is available for download at the bottom of the post.", null, "Designing the model\n\nFor simplicity we will go against usual econometric practice and both train and validate the model on the same dataset.\n\nUsually in modelling, you would divide the dataset into subsets so that the model can be trained on one subset and then validated on another subset of completely new and, to the model, unknown data.\n\nIt should be noted that skipping this step will inherently make our model seem better than it might really be.\n\nFirst we input the Gender.yxdb dataset and, from the Predictive pane in Alteryx, drag the Logistic Regression tool onto the canvas.", null, "We notice that the Logistic Regression tool has one input and two outputs. The input is simply the dataset we want to perform the regression on. The bottom output, the R output, returns a prefabricated report of the regression when connected to a Browse tool. The top output, the O output, returns an R object containing the characteristics of the model for further analysis. Note that this R object can only be used with R-based tools.\n\nBelow, the configuration pane of the Logistic Regression tool is displayed.", null, "First we need to assign a name to the model.\n\nSecondly we select the desired target variable; this is our dependent variable, gender. Then we select the predictor variables, the variables we want to predict gender from, in our case Height and Weight.\n\nLastly we have to decide the type of logistic regression. Here we have the choice of Logit and Probit. These are two different means of estimation and differ in their mathematical properties. In our example the Logit model type is used.\n\nEconometric modelling is a science of its own, with plenty of literature explaining the correct methodological approach; this is however beyond the scope of this post. So for now, we will just happily move forward with the choice of the Logit model.\n\nIf we inspect the output contained in the prefabricated report, we can examine the significance of the variables included in the model, as well as their correlations with the response variable, in our case gender.\n\nThe model we are developing is quite simple, and when considering the significance of the variables, we find that we have more than desirable levels of significance.", null, "Now we have a model containing a ruleset, so to speak, for how to predict gender based on height and weight.\n\nMaking predictions\n\nThe goal is now to leverage the model in predicting the gender of the individuals in our dataset. This we do through the Score tool, which is found in the Predictive pane of Alteryx.", null, "The Score tool allows us to assign a probability of being male or female to each observation in our dataset, based on our model.", null, "In contrast to the Logistic Regression tool, the Score tool has two inputs and only one output. The top input takes the R object from a model, in our case the logistic regression. The bottom input takes the dataset on which predictions are wanted.\n\nIn our example we will disregard the configuration pane of the Score tool, as we haven't made any modifications to the data, such as oversampling, that we need to make the Score tool aware of.\n\nThe output of the Score tool is the inputted dataset with two attached fields. The dataset now also contains the probability of being either male or female.", null, "Evaluating the model\n\nIn the last step of our very basic example we'll evaluate the predictive accuracy of the model. This can be done in many ways; in this post we will use the Lift Chart tool.", null, "The Lift Chart tool is, like the other tools used in this post, found in the Predictive pane of Alteryx. The tool measures the captured response of the predictive model. Say we are interested in identifying all the males in our dataset; the lift chart tells us how many of the males are identified when looking through different proportions of the dataset.", null, "Above we see the output of the Lift Chart tool. The Lift Chart should be read as follows:\n\nWhen looking at 20% of the dataset we are able to capture 40% of the males in the data; when looking at 30% of the data our model is able to capture 60% of the males, and so forth.\n\nA neat advantage of the Lift Chart tool is the capability of comparing a variety of models. In that way we are able to identify the best performing model out of many.\n\nBelow, the configuration of the Lift Chart tool can be seen.", null, "Above we simply specify that we want to see a chart of the total cumulative response rate, that the dataset we evaluate contains 50% males, and that we are looking for observations of males.\n\nThe information that the dataset contains 50% males comes from a previous calculation made with the Summarize tool.\n\nBelow, the entire workflow can be observed.", null, "In conclusion, we now have a way of predicting the gender of an individual knowing only their height and weight. The task is quite neat, as there are strong correlations between height, weight and gender. When trying to model more complicated relationships the complexity of the model will of course increase; however, this post presents the basic framework for making predictions with Alteryx.\n\nBack to blog" ]
[ null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-11.06.32.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-11.47.05.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-16.45.50.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-12.38.01.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-13.06.15.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-13.01.25.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-13.04.18.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-14.00.13.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-14.31.13%20%281%29.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-15.03.03.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-15.47.21.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-16.10.47.png", null, "https://media.inviso.dk/2016-02-01/medium/Screen-Shot-2016-01-08-at-16.15.05.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92358696,"math_prob":0.94996977,"size":7090,"snap":"2020-45-2020-50","text_gpt3_token_len":1414,"char_repetition_ratio":0.14944962,"word_repetition_ratio":0.02124183,"special_character_ratio":0.19294782,"punctuation_ratio":0.0797912,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9751955,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T18:32:48Z\",\"WARC-Record-ID\":\"<urn:uuid:10f0d67d-b6cf-4db1-978c-2df4d70b91fc>\",\"Content-Length\":\"19942\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f84e64c4-7768-420b-99e3-d77731f54e86>\",\"WARC-Concurrent-To\":\"<urn:uuid:357a5d0f-2981-4656-bd64-b57930e359df>\",\"WARC-IP-Address\":\"34.248.250.102\",\"WARC-Target-URI\":\"https://inviso.dk/blog/post/predictions-with-alteryx\",\"WARC-Payload-Digest\":\"sha1:ZFGMAGY3XURWRB3ZB2DR6VLU6TEEIENE\",\"WARC-Block-Digest\":\"sha1:VN6DAG7JUWDUYWSJTH2CII6ALP2WCAHU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911229.96_warc_CC-MAIN-20201030182757-20201030212757-00327.warc.gz\"}"}
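The pipeline the Alteryx post walks through — fit a logistic regression on height and weight, score every row with a probability, then check how much of the positive class the top-scored fraction captures — can be sketched without Alteryx at all. Below is a minimal pure-Python version; the synthetic data, learning rate and epoch count are invented for illustration and are not taken from the post's Gender.yxdb file.

```python
import math
import random

random.seed(42)

def sigmoid(z):
    # Clamp z to avoid math.exp overflow on extreme inputs
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# Synthetic stand-in for Gender.yxdb: (height cm, weight kg, 1 = male)
data = [(random.gauss(178, 6), random.gauss(84, 9), 1) for _ in range(100)] + \
       [(random.gauss(164, 6), random.gauss(66, 9), 0) for _ in range(100)]

# Standardise the two predictors so plain gradient descent behaves well
cols = list(zip(*data))
means = [sum(c) / len(c) for c in cols[:2]]
sds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) for c, m in zip(cols[:2], means)]
X = [[(h - means[0]) / sds[0], (w - means[1]) / sds[1]] for h, w, _ in data]
y = [g for _, _, g in data]

# Fit P(male) = sigmoid(w . x + b) by batch gradient descent (the "Logit" choice)
w, b = [0.0, 0.0], 0.0
for _ in range(300):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    w = [wj - 0.5 * gj / len(X) for wj, gj in zip(w, gw)]
    b -= 0.5 * gb / len(X)

# "Score tool": attach a probability of being male to every row
scores = [sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) for xi in X]
accuracy = sum((s >= 0.5) == (yi == 1) for s, yi in zip(scores, y)) / len(y)

# "Lift chart": what share of all males sits in the top 30% of scored rows?
ranked = sorted(zip(scores, y), reverse=True)
top = ranked[: int(0.3 * len(ranked))]
capture_at_30 = sum(yi for _, yi in top) / sum(y)
```

On this well-separated toy data the model classifies most rows correctly, and the top 30% of scored rows captures well over 30% of the males — the same "better than the random diagonal" reading the post applies to its lift chart.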
https://www.fullversiondl.com/ptc-mathcad-prime-v6-0-0-0/
[ "", null, "PTC Mathcad Prime v6.0.0.0 is a software package used for mathematical engineering calculations; users can easily perform, document and share the calculations and results of their designs. Math symbols are supported and rendered in their natural notation, which makes this feature very easy to work with. In addition to acting as a calculator, the software can also plot two-dimensional and three-dimensional functions. Its mathematical capabilities include computing derivatives and integrals (definite and indefinite), solving algebraic equations, working with different types of matrices and matrix operations, and applying transforms such as the Laplace and Fourier transforms; it also supports programming and unit conversions.\n\n## Here are some key Features of “PTC Mathcad Prime v6.0.0.0” :\n\n• Easy to use\n• Ability to combine text, mathematical symbols, charts, etc.\n• Ability to create sophisticated and professional mathematical functions\n• Display of on-page equations in the traditional form of equations\n• Automatic handling of units (automated unit conversion)\n• Design of 2D and 3D charts\n• Advanced mathematical discovery, display, manipulation and data analysis for designs\n• Experience-based design to optimize processing operations\n• Integration with Truenumbers to access variables without losing their cohesion\n• Integration with Kornucopia for shorter calculation times\n• Compatible with software such as Pro/ENGINEER, PDMLink, ProductPoint, etc.\n\nSystem Requirements\n\n• Software Requirements\n– Windows 10 (64-bit)\n– Windows 8.1 (64-bit)\n– Windows 8 (64-bit)\n– Windows 7 (64-bit)", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20300%20190%22%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20125%2070%22%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9005192,"math_prob":0.9711246,"size":1777,"snap":"2021-43-2021-49","text_gpt3_token_len":365,"char_repetition_ratio":0.12182741,"word_repetition_ratio":0.0,"special_character_ratio":0.20258863,"punctuation_ratio":0.0890411,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.973861,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T09:39:18Z\",\"WARC-Record-ID\":\"<urn:uuid:fd29d1b9-3e8c-4277-8c91-86a2c25c170f>\",\"Content-Length\":\"293713\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:458ea620-4a2c-4460-b22b-f51abb290f3f>\",\"WARC-Concurrent-To\":\"<urn:uuid:a1755654-2133-42b8-ae25-0ebe613df8d4>\",\"WARC-IP-Address\":\"172.67.190.115\",\"WARC-Target-URI\":\"https://www.fullversiondl.com/ptc-mathcad-prime-v6-0-0-0/\",\"WARC-Payload-Digest\":\"sha1:WXQD2AH6CYONUB2VB6TPCGMEHCLFRXEQ\",\"WARC-Block-Digest\":\"sha1:4QJ36QRAA6EA5U74YQL2TA5ZPIIF2RFP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363791.16_warc_CC-MAIN-20211209091917-20211209121917-00425.warc.gz\"}"}
https://answers.everydaycalculation.com/simplify-fraction/476-209
[ "Solutions by everydaycalculation.com\n\n## Reduce 476/209 to lowest terms\n\n476/209 is already in the simplest form. It can be written as 2.277512 in decimal form (rounded to 6 decimal places).\n\n#### Steps to simplify fractions\n\n1. Find the GCD (or HCF) of the numerator and denominator\nGCD of 476 and 209 is 1\n2. Divide both the numerator and the denominator by the GCD\n(476 ÷ 1)/(209 ÷ 1)\n3. Reduced fraction: 476/209\nTherefore, 476/209 simplified to lowest terms is 476/209.\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7609653,"math_prob":0.64250904,"size":443,"snap":"2021-31-2021-39","text_gpt3_token_len":131,"char_repetition_ratio":0.12528473,"word_repetition_ratio":0.0,"special_character_ratio":0.4040632,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95465267,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-05T06:53:00Z\",\"WARC-Record-ID\":\"<urn:uuid:786d6cdd-73b4-4ac7-a0e9-85e534c6f0f1>\",\"Content-Length\":\"6517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c923527-93f4-4fb6-a2c1-aae6197f37a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:b1c98e2b-ff5b-4c7e-ace4-5d56dd0c1e4d>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/simplify-fraction/476-209\",\"WARC-Payload-Digest\":\"sha1:5GNBH3RKUZXAUCELB6VLZY5TJJBL4PG4\",\"WARC-Block-Digest\":\"sha1:QKLB25XSOLEWUIEFS2TX7DI4T27IGR4O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046155458.35_warc_CC-MAIN-20210805063730-20210805093730-00059.warc.gz\"}"}
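The three steps listed on the page are mechanical, so they are easy to sketch in a few lines; the standard library's `Fraction` performs the same reduction automatically.

```python
from fractions import Fraction
from math import gcd

def reduce(numerator, denominator):
    """Steps from the page: find the GCD, then divide both parts by it."""
    g = gcd(numerator, denominator)          # step 1: GCD of 476 and 209 is 1
    return numerator // g, denominator // g  # step 2: divide both by the GCD

# Step 3: 476/209 is already in lowest terms, since gcd(476, 209) == 1
print(reduce(476, 209))        # (476, 209)
print(round(476 / 209, 6))     # 2.277512, matching the decimal on the page
print(Fraction(476, 209))      # the stdlib arrives at the same reduced form
```

For a fraction that does reduce, e.g. 8/12, the same function returns (2, 3).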
http://wangcong.org/2012/07/06/treap/
[ "treap\n\nA treap is a very interesting data structure; as the name suggests, it is a hybrid of a tree and a heap. To see why such a data structure exists, we have to start with the binary search tree (BST).\n\n1) For search, simply use the key of the binary tree; this is no different from an ordinary binary tree:\n\n[perl]\nsub _get_node {\nmy $self = shift;\nmy $key = shift;\nwhile(!$self->_is_empty() and $self->ne($key)){\n$self = $self->{$self->lt($key)?\"left\":\"right\"}\n}\nreturn $self->_is_empty() ? 0 : $self;\n}\n[/perl]\n\n2) To insert a new node with key = x, pick a random value y as its priority, search the tree for x, and create a new node at the position where x should appear; then, as long as x is not the root and its priority is higher than its parent's, rotate the node so that it swaps places with its parent.\n\n[perl]\nsub insert {\nmy $self = shift;\nmy $key = shift;\nmy $data = shift;\n$data = defined($data)? $data : $key;\nmy $priority = shift() || rand();\n\nif($self->_is_empty()) {\n$self->{priority} = $priority,\n$self->{key} = $key;\n$self->{data} = $data;\n$self->{left} = $self->new($self->{cmp});\n$self->{right} = $self->new($self->{cmp});\nreturn $self;\n}\n\nif($self->gt($key)){\n$self->{right}->insert($key,$data,$priority);\nif($self->{right}->{priority} > $self->{priority}){\n$self->_rotate_left();\n}\n}elsif($self->lt($key)){\n$self->{left}->insert($key,$data,$priority);\nif($self->{left}->{priority} > $self->{priority}){\n$self->_rotate_right();\n}\n\n}else{\n$self->_delete_node();\n$self->insert($key,$data,$priority);\n}\nreturn $self;\n\n}\n[/perl]\n\n3) Deleting a node is relatively more troublesome: if the node x to be deleted is a leaf, just remove it; if x has one child z, remove x and make z a child of x's parent; if x has two children, swap x with its successor and then perform the corresponding rotations. The implementation below is recursive and easy to follow. Note: it does not actually remove the node; it just sets its priority to the lowest value, -100, which also makes the code simpler than the theory above.\n\n[perl]\nsub delete {\nmy $self = shift;\nmy $key = shift;\nreturn 0 unless $self = $self->_get_node($key);\n$self->_delete_node();\n}\n\nsub _delete_node {\nmy $self = shift;\nif($self->_is_leaf()) {\n%$self = (priority => -100, cmp => $self->{cmp});\n} elsif ($self->{left}->{priority} > $self->{right}->{priority}) {\n$self->_rotate_right();\n$self->{right}->_delete_node();\n} else {\n$self->_rotate_left();\n$self->{left}->_delete_node();\n}\n}\n[/perl]\n\nThe rotations themselves are implemented as follows:\n\n[perl]\nsub _clone_node {\nmy $self = shift;\nmy $other = shift;\n%$self = %$other;\n}\n\nsub _rotate_left {\nmy $self = shift;\nmy $tmp = $self->new($self->{cmp});\n$tmp->_clone_node($self);\n$self->_clone_node($self->{right});\n$tmp->{right} = $self->{left};\n$self->{left} = $tmp;\n\n}\n\nsub _rotate_right {\nmy $self = shift;\nmy $tmp = $self->new($self->{cmp});\n$tmp->_clone_node($self);\n$self->_clone_node($self->{left});\n$tmp->{left} = $self->{right};\n$self->{right} = $tmp;\n}\n[/perl]\n\n    B              A\n   / \\            / \\\n  A   2   -->    0   B\n / \\                / \\\n0   1              1   2" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.50969595,"math_prob":0.9934794,"size":2950,"snap":"2022-05-2022-21","text_gpt3_token_len":1391,"char_repetition_ratio":0.21215206,"word_repetition_ratio":0.0990099,"special_character_ratio":0.35457626,"punctuation_ratio":0.16359918,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96888113,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T08:31:44Z\",\"WARC-Record-ID\":\"<urn:uuid:19366e15-cba3-4cfb-aa7b-0bac2e9a5fa1>\",\"Content-Length\":\"15889\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f70243e-f229-4840-a10d-4cccab3a0a2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac97251e-c282-414b-8d1e-9d69fa80b8dc>\",\"WARC-IP-Address\":\"192.30.252.153\",\"WARC-Target-URI\":\"http://wangcong.org/2012/07/06/treap/\",\"WARC-Payload-Digest\":\"sha1:7RUFPEDBUBG276SCXNAU2FV5E2ST64AU\",\"WARC-Block-Digest\":\"sha1:VO7GLFD6JNNRLJUOLB7LIQHPJQH2SQPX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305423.58_warc_CC-MAIN-20220128074016-20220128104016-00298.warc.gz\"}"}
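The same three treap operations the post implements in Perl — BST search by key, insert with a random priority plus rotations, and delete by rotating the node downward — can be written compactly in Python. This is an independent sketch, not a port of the blog's object layout; in particular, delete here really unlinks the node instead of using the post's priority = -100 trick, and duplicate keys are simply ignored.

```python
import random

class Node:
    """One treap node: BST-ordered by key, max-heap-ordered by priority."""
    __slots__ = ("key", "prio", "left", "right")

    def __init__(self, key, prio):
        self.key, self.prio = key, prio
        self.left = self.right = None

def rotate_right(n):          # lift the left child above n
    l = n.left
    n.left, l.right = l.right, n
    return l

def rotate_left(n):           # lift the right child above n
    r = n.right
    n.right, r.left = r.left, n
    return r

def insert(n, key, prio=None):
    """BST insert, then rotate up while the heap property is violated."""
    if prio is None:
        prio = random.random()
    if n is None:
        return Node(key, prio)
    if key < n.key:
        n.left = insert(n.left, key, prio)
        if n.left.prio > n.prio:
            n = rotate_right(n)
    elif key > n.key:
        n.right = insert(n.right, key, prio)
        if n.right.prio > n.prio:
            n = rotate_left(n)
    return n                  # duplicate keys are ignored

def delete(n, key):
    """Rotate the doomed node down until it has at most one child, then unlink it."""
    if n is None:
        return None
    if key < n.key:
        n.left = delete(n.left, key)
    elif key > n.key:
        n.right = delete(n.right, key)
    elif n.left is None:
        return n.right
    elif n.right is None:
        return n.left
    elif n.left.prio > n.right.prio:
        n = rotate_right(n)
        n.right = delete(n.right, key)
    else:
        n = rotate_left(n)
        n.left = delete(n.left, key)
    return n

def inorder(n):
    return [] if n is None else inorder(n.left) + [n.key] + inorder(n.right)

def heap_ok(n):
    """Verify the max-heap property on priorities."""
    if n is None:
        return True
    for c in (n.left, n.right):
        if c is not None and c.prio > n.prio:
            return False
    return heap_ok(n.left) and heap_ok(n.right)

random.seed(1)
root = None
for k in random.sample(range(50), 50):
    root = insert(root, k)
for k in (0, 7, 49):
    root = delete(root, k)
```

After the inserts and deletes, an in-order walk yields the remaining keys in sorted order, and every node's priority still dominates its children's.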
http://electsylviahammond.com/pre-algebra-slope/pre-algebra-slope-grade-math-question-test-bank-review-sampler-for-pre-algebra-slope-of-a-line/
[ "## Pre Algebra Slope Grade Math Question Test Bank Review Sampler For Pre Algebra Slope Of A Line", null, "pre algebra slope grade math question test bank review sampler for pre algebra slope of a line.\n\npre algebra slope of a line worksheets builder measurement height similar triangle test,glencoe pre algebra slope intercept form calculating ladder activity finding from a graph and worksheet answers infinite graphing lines in,math algebra question bank mega bundle for by pre slope test worksheets glencoe intercept form,infinite pre algebra graphing lines in slope intercept form worksheet worksheets introduction to identifying and using,glencoe pre algebra slope intercept form lesson 8 7 worksheet answers worksheets,pre algebra slope test intercept form math vocabulary word wall worksheet answers,algebra builder slope pre write the intercept form of equation each line a test,pre algebra slope intercept form worksheets printable math find the download them or print of a line,algebra 2 slope of a line warm up purple workbook pg pre worksheet answers worksheets and y intercept,pre algebra slope test glencoe intercept form k worksheet school worksheets book letter study site write the of equation each line." ]
[ null, "http://electsylviahammond.com/wp-content/uploads/2018/12/pre-algebra-slope-grade-math-question-test-bank-review-sampler-for-pre-algebra-slope-of-a-line.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81302154,"math_prob":0.8059719,"size":1209,"snap":"2019-51-2020-05","text_gpt3_token_len":221,"char_repetition_ratio":0.2307054,"word_repetition_ratio":0.022346368,"special_character_ratio":0.16211745,"punctuation_ratio":0.054187194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99934465,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T06:13:24Z\",\"WARC-Record-ID\":\"<urn:uuid:35b55c2e-8883-4a10-a20e-32d092d52a0e>\",\"Content-Length\":\"49081\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d473a0a7-4dc2-4a24-9060-0062e713368e>\",\"WARC-Concurrent-To\":\"<urn:uuid:000b94e4-fc95-480e-9f25-49714528ee79>\",\"WARC-IP-Address\":\"104.24.116.162\",\"WARC-Target-URI\":\"http://electsylviahammond.com/pre-algebra-slope/pre-algebra-slope-grade-math-question-test-bank-review-sampler-for-pre-algebra-slope-of-a-line/\",\"WARC-Payload-Digest\":\"sha1:UV47GXF7RTDFEWUQLW4ZWPA3RC73OHAJ\",\"WARC-Block-Digest\":\"sha1:N5P4T3LOIMCRMFUYUL3Z26LLO6QI52WG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250608295.52_warc_CC-MAIN-20200123041345-20200123070345-00400.warc.gz\"}"}
http://export.arxiv.org/abs/1811.09059
[ "Full-text links:\n\nquant-ph\n\n# Title: Cyclic permutations for qudits in $d$ dimensions\n\nAbstract: One of the main challenges in quantum technologies is the ability to control individual quantum systems. This task becomes increasingly difficult as the dimension of the system grows. Here we propose a general setup for cyclic permutations $X_d$ in $d$ dimensions, a major primitive for constructing arbitrary qudit gates. Using orbital angular momentum states as a qudit, the simplest implementation of the $X_d$ gate in $d$ dimensions requires a single quantum sorter $S_d$ and two spiral phase plates. We then extend this construction to a generalised $X_d(p)$ gate to perform a cyclic permutation of a set of $d$, equally spaced values $\\{ \\ket{\\ell_0}, \\ket{\\ell_0+p},\\ldots, \\ket{\\ell_0+(d-1)p} \\} \\mapsto \\{ \\ket{\\ell_0+p}, \\ket {\\ell_0+2p},\\ldots, \\ket{\\ell_0} \\}$. We find compact implementations for the generalised $X_d(p)$ gate in both Michelson (one sorter $S_d$, two spiral phase plates) and Mach-Zehnder configurations (two sorters $S_d$, two spiral phase plates). Remarkably, the number of spiral phase plates is independent of the qudit dimension $d$. Our architecture for $X_d$ and generalised $X_d(p)$ gate will enable complex quantum algorithms for qudits, for example quantum protocols using photonic OAM states.\n Subjects: Quantum Physics (quant-ph) Journal reference: Scientific Reports 9, 6337 (2019) DOI: 10.1038/s41598-019-42708-7 Cite as: arXiv:1811.09059 [quant-ph] (or arXiv:1811.09059v2 [quant-ph] for this version)\n\n## Submission history\n\nFrom: Radu Ionicioiu [view email]\n[v1] Thu, 22 Nov 2018 08:20:47 GMT (23kb,D)\n[v2] Fri, 19 Apr 2019 11:14:43 GMT (80kb,D)\n\nLink back to: arXiv, form interface, contact." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7637432,"math_prob":0.98859507,"size":1662,"snap":"2019-13-2019-22","text_gpt3_token_len":449,"char_repetition_ratio":0.110373944,"word_repetition_ratio":0.0,"special_character_ratio":0.27376655,"punctuation_ratio":0.13442624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98050684,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-21T23:26:34Z\",\"WARC-Record-ID\":\"<urn:uuid:b49454fb-b0fc-49b5-9ba4-cdbe6d2d4d4d>\",\"Content-Length\":\"13707\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b5cf081-08b7-479e-954f-f4e71acccd50>\",\"WARC-Concurrent-To\":\"<urn:uuid:99ea3bbf-bd6c-415b-9ac4-4a5c73fc7a43>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"http://export.arxiv.org/abs/1811.09059\",\"WARC-Payload-Digest\":\"sha1:EENZCCGE43KAOZCL34RMDTSIZLJEW5ZJ\",\"WARC-Block-Digest\":\"sha1:BKQ2BDXZCOVRCW3QYBQMENY7JZ5YBYHP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256586.62_warc_CC-MAIN-20190521222812-20190522004812-00015.warc.gz\"}"}
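The abstract describes an optical implementation, but numerically the gate $X_d(p)$ it builds is just a cyclic shift of the $d$ basis labels by $p$. A tiny sketch of that action on a state-amplitude list (purely illustrative; this says nothing about the sorter/phase-plate construction itself):

```python
def X_d(amplitudes, p=1):
    """Cyclic permutation |j> -> |j + p mod d> on a d-dimensional amplitude list."""
    d = len(amplitudes)
    out = [0] * d
    for j, a in enumerate(amplitudes):
        out[(j + p) % d] = a
    return out

basis0 = [1, 0, 0, 0, 0]   # |0> in d = 5
state = X_d(basis0)        # -> |1>, i.e. [0, 1, 0, 0, 0]
```

Applying `X_d` with step 1 a total of d times returns any state to itself, which is exactly the cyclic-group structure the paper exploits.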
http://patilv.com/2014/06/17/ted-talks/
[ "# Frequent Speakers at Ted and Word Cloud of Talk Titles\n\nA recent article in openculture.com by Dan Colman mentioned that there was a list of 1756 Ted Talks maintained by “someone” in a spreadsheet format. A link to this sheet can also be found on this page on Wikipedia. It was titled “Ted Talks as of 5/23/2014”. I downloaded that spreadsheet on 6/12/2014 from this link and saved it as a csv file. It turned out to be a list of 1755 talks. Here, I make a wordcloud of the titles of these talks and a few ggplots to identify speakers with 3 or more appearances, using Karthik Ram’s Wes Anderson palette for R. The code and data for this post can be found on my github site at this link.\n\n``````library(stringr)\nlibrary(tm)\nlibrary(wordcloud)\nlibrary(wesanderson)\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(gridExtra)\ncolnames(ted)\n``````\n``````## \"URL\" \"ID\" \"URL.1\" \"Speaker\"\n## \"Name\" \"Short.Summary\" \"Event\" \"Duration\"\n## \"Publish.date\"\n``````\n\nColumns of interest in this study are the names of the \"Speaker\", the titles of the talks - the \"Name\" column - and the duration of the talks - the \"Duration\" column. The later titles seem to have the first and last names of the speakers at the beginning. Upon going through that column, I realized that this practice began with the 424th entry. To be safe, let's remove the first two words from that point on.\n\n# Some Cleaning\n\nIn this section, we remove the first two words from the 424th entry onward. We then clean up the text by removing some punctuation, extra spaces, and any URLs that may be present.\n\n``````ted$Name <- as.character(ted$Name)\nfor (i in 424:nrow(ted)) {\nted$Name[i] <- word(ted$Name[i], 3, -1)\n} # Removing first two words from row 424 onwards\n\n# Function to clean text ## from Gaston Sanchez's work\nclean.text <- function(x) {\n# to lowercase\nx <- tolower(x)\n# remove punctuation marks\nx <- gsub(\"[[:punct:]]\", \"\", x)\n# remove numbers\nx <- gsub(\"[[:digit:]]\", \"\", x)\n# remove tabs and extra spaces\nx <- gsub(\"[ |\\t]{2,}\", \"\", x)\n# remove blank spaces at the beginning\nx <- gsub(\"^ \", \"\", x)\n# remove blank spaces at the end\nx <- gsub(\" $\", \"\", x)\n# result\nreturn(x)\n}\n``````\n\n# Word cloud of popular words in Titles\n\n``````myCorpus <- Corpus(VectorSource(clean.text(ted$Name)))\nmyStopwords <- c(stopwords(\"english\"), \"ted prize wish\")\n# The latter was required because there were few titles with that\n# phrase\n\nmyCorpus <- tm_map(myCorpus, removeWords, myStopwords)\ntdmpremat <- TermDocumentMatrix(myCorpus)\ntdm <- as.matrix(tdmpremat)\nsortedMatrix <- sort(rowSums(tdm), decreasing = TRUE)\ntdmframe <- data.frame(word = names(sortedMatrix), freq = sortedMatrix)\n\n# plotting words that appear at least 5 times\nwordcloud(tdmframe$word, tdmframe$freq, random.order = FALSE, random.color = FALSE,\nmin.freq = 5, scale = c(5, 0.2), colors = wes.palette(5, \"Darjeeling\"))\n``````", null, "# Speakers with more than 2 appearances and mean duration of their talks\n\n``````numtalks <- data.frame(table(ted$Speaker))\ntable(numtalks$Freq)\n``````\n``````##\n## 1 2 3 4 5 6 9\n## 1301 130 40 11 3 1 1\n``````\n\nThere were 1487 different speakers. 1301 of them gave one talk, whereas 130 of them gave two talks. Below, I will focus only on those people who have given more than 2 talks, which is a list of 56 people.\n\nLet's first deal with the talk duration variable. Here, we compute the mean duration of talks for this group of 56.\n\n``````# Function adapted from\n# http://stackoverflow.com/questions/5186972/how-to-convert-time-mmss-to-decimal-form-in-r\nted$TalkTime <- sapply(strsplit(as.character(ted$Duration), \":\"), function(x) {\nx <- as.numeric(x)\nx[1] * 60 + x[2] + x[3]/60\n})\n\nspeakfreqandduration <- ted %>% group_by(Speaker) %>% summarise(NumTalks = n(),\nMean.Talk.Time = mean(TalkTime, na.rm = TRUE)) %>% filter(NumTalks >\n2)\n\nsummary(speakfreqandduration$Mean.Talk.Time)\n``````\n``````## Min. 1st Qu. Median Mean 3rd Qu. Max.\n## 3.03 11.60 15.50 14.90 17.20 36.30\n``````\n\nThe mean and median talk times are around 15 minutes. There are, of course, talks that are longer and shorter than those. More on those in the charts below.\n\n``````gg1 <- ggplot(speakfreqandduration, aes(x = NumTalks, fill = as.factor(NumTalks))) +\nscale_x_continuous(breaks = 1:10) + geom_histogram() + xlab(\"Number of talks\") +\nggtitle(\"Number of talks\") + scale_fill_manual(values = wes.palette(5,\n\"Darjeeling2\")) + theme_bw() + theme(legend.position = \"none\")\n\ngg2 <- ggplot(speakfreqandduration, aes(x = Mean.Talk.Time, fill = as.factor(NumTalks))) +\nscale_x_continuous(breaks = c(5, 10, 15, 20, 25, 30, 35, 40, 45, 50,\n55, 60, 65, 70)) + geom_histogram() + scale_fill_manual(values = wes.palette(5,\n\"Darjeeling2\")) + ggtitle(\"Mean talk time\") + theme_bw() + theme(legend.position = \"none\") +\nscale_y_continuous(breaks = 1:10)\n\ngg3 <- ggplot(speakfreqandduration, aes(x = reorder(Speaker, Mean.Talk.Time),\ny = Mean.Talk.Time, fill = as.factor(NumTalks))) + scale_fill_manual(values = wes.palette(5,\n\"Darjeeling2\"), name = (\"Number of\\nTalks\")) + geom_bar(stat = \"identity\") +\nxlab(\"Speaker\") + theme_bw() + theme(axis.text.y = element_text(size = 8),\naxis.title.y = element_blank()) + coord_flip() + ggtitle(\"Speakers, mean talk time, and number of talks\")\n\ngrid.arrange(gg3, arrangeGrob(gg1, gg2, ncol = 1), ncol = 2, widths = c(2,\n1))\n``````", null, "• Hans Rosling has 9 appearances, the highest in this list, followed by Marco Tempest, who has 6 appearances.\n• Assuming that many talks were scheduled for 15 mins, none of the 56 speakers had a mean talk time of around 14 mins. They all either took up the whole 15 mins or perhaps extended their talk a bit. Interesting stuff." ]
[ null, "http://patilv.com/img/2014-6-17-Ted-Talks/unnamed-chunk-3.png", null, "http://patilv.com/img/2014-6-17-Ted-Talks/unnamed-chunk-6.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8127254,"math_prob":0.973482,"size":5619,"snap":"2019-51-2020-05","text_gpt3_token_len":1620,"char_repetition_ratio":0.09439003,"word_repetition_ratio":0.03321471,"special_character_ratio":0.30895177,"punctuation_ratio":0.16201118,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96159714,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T11:06:43Z\",\"WARC-Record-ID\":\"<urn:uuid:1447d004-b627-4716-983c-ac3ce791bcbf>\",\"Content-Length\":\"15552\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9b7b19b-a670-43c4-afdb-cbfca143fffb>\",\"WARC-Concurrent-To\":\"<urn:uuid:bbbe42e2-61f7-4573-8591-1c931a56d903>\",\"WARC-IP-Address\":\"192.30.252.153\",\"WARC-Target-URI\":\"http://patilv.com/2014/06/17/ted-talks/\",\"WARC-Payload-Digest\":\"sha1:ZV4C5DERVMLYLHKC6CSUXWJRFHZEZG4S\",\"WARC-Block-Digest\":\"sha1:UTDHX66VCEUW6BIFWIMTZAUPD2NFA2AB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251688806.91_warc_CC-MAIN-20200126104828-20200126134828-00325.warc.gz\"}"}
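The post converts each talk's Duration string into decimal minutes before averaging per speaker. The same conversion and grouping look like this in plain Python; the sample rows are invented, and the duration format is assumed to be H:MM:SS (hours·60 + minutes + seconds/60 gives minutes).

```python
from collections import defaultdict

def to_minutes(duration):
    """'H:MM:SS' -> decimal minutes (hours*60 + minutes + seconds/60)."""
    h, m, s = (int(part) for part in duration.split(":"))
    return h * 60 + m + s / 60

# Hypothetical (speaker, duration) rows standing in for the TED spreadsheet
talks = [("Hans Rosling", "0:19:50"), ("Hans Rosling", "0:10:00"),
         ("Marco Tempest", "0:05:45")]

per_speaker = defaultdict(list)
for speaker, duration in talks:
    per_speaker[speaker].append(to_minutes(duration))

# Mean talk time per speaker, mirroring the group_by/summarise step
mean_talk_time = {s: sum(ts) / len(ts) for s, ts in per_speaker.items()}
```

A "0:15:00" duration maps to exactly 15.0 minutes, which is the scale the post's summary statistics are reported on.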
https://crypto.stackexchange.com/questions/71947/for-a-hashing-function-like-md5-how-similar-can-two-plaintext-strings-be-and-st/71948
# For a hashing function like MD5, how similar can two plaintext strings be and still generate the same hash?

When I say similar, I'm referring to the Hamming distance, the Levenshtein distance, or a similar string distance metric that measures how similar or dissimilar two strings are.

For instance, are there two plaintext strings with a Levenshtein distance of 1 which share the same MD5 hash? If not, do we know the smallest Levenshtein distance possible for a pair of strings which share the same MD5 hash? Is it even possible to determine this with certainty?

I'm asking about MD5 since it's a well-known and simple hash. But I'd love to know how this applies to SHA-2, bcrypt, or other common hash functions.

There's a similar question which is looking for the shortest-length strings that can generate a collision, but I'm looking for the smallest string distance to generate a collision. The actual length of the source strings isn't important.

(Asking purely out of curiosity; I don't have a real-world use for this)

- This is an interesting question, but I'm not sure what utility it could have. MD5 in particular isn't generally broken by knowing some part of the plaintext and trying variations of it; it's such a fast and parallelizable function that you can just try out hundreds of millions of random strings until you get one that matches. If you have any particular purpose beyond just curiosity (which I can understand, and doesn't make this a bad question) then you should mention it, because it might help us give a better answer with more relevant information. – Nic, Jul 9 '19 at 4:41
- @NicHartley Good call, thanks. I've edited in my reason for asking. Long story short: I'm just curious :) – John Ellmore, Jul 9 '19 at 4:49
- Given the pigeonhole principle, I'd be surprised if the answer to this isn't 1 bit. Jul 9 '19 at 5:45
- This may be very difficult to determine; from what I've read, MD5 has a very good avalanche effect. en.wikipedia.org/wiki/Avalanche_effect Jul 9 '19 at 15:13
- @LieRyan: Nope, see AleksanderRas's answer and its comments. If you have N pigeonholes and M > N pigeons, at least one pigeonhole must have more than 1 pigeon, but you can still have M − 1 empty pigeonholes. Jul 9 '19 at 15:17

---

This answer is based on the work by AleksanderRas, although my conclusion is different.

First, to lay out a definition, a hash is a function that takes an arbitrary-length input to a fixed-length output. For example, MD5 takes any input and produces a 128-bit output.

A cryptographic hash is a hash function which has certain additional security properties.

Because a hash function takes an arbitrary-length input and produces a fixed-length output, it is guaranteed that there are some inputs which produce the same outputs. These are collisions.

Finally, the Hamming distance is the number of bits by which two inputs of the same length differ.

For any hash function, whether or not it is a cryptographic hash function, there are inputs with a Hamming distance of 2 which collide. This can also be shown by the pigeonhole principle:

- Suppose that the hash function returns an n-bit output.
- There are 2^n possible outputs.
- Consider a string B which is 2^n + 1 bits long.
- Consider then the set of all strings which differ from B in exactly one bit. There are 2^n + 1 such strings.
- The Hamming distance between any two different strings in this set is 2: a 1-bit change to get back to B and a second 1-bit change to get to the other string in the set.
- Because there are more strings in this set than there are possible output hashes, at least two strings must share a hash.
- Therefore the hash function has a 2-bit-difference collision.

It is possible to construct a hash function which does not have any collisions between strings with a Hamming distance of 1. This can be shown as follows:

- Consider a string B.
- Consider a string C which has a Hamming distance of 1 from B.
- The parity of B must be different from the parity of C. That is, if there are an odd number of bits set in B, there must be an even number in C, and vice versa.
- Therefore any hash function which directly encodes the parity of the input, such as regular MD5 with the parity bit appended, will have a minimum collision Hamming distance of 2.

There are less trivial hash functions than the parity one which have a minimum collision Hamming distance of 2. For example, CBC-MAC is a family of algorithms which encrypts a bitstring with a fixed key under CBC mode and returns the last block. This meets the definition of a hash function: it takes an arbitrary-length input and returns an output fixed at the size of the block. Although (like all hash functions) CBC-MAC is vulnerable to collisions, it cannot have a collision if all changes occur within a single block. (This property comes from the fact that it is an encryption function and therefore a permutation, but further elaboration would be off topic.) Since a Hamming distance of 1 corresponds to a single bit change, and that single bit change is necessarily in just one block, it cannot cause a collision.

This should not be taken to mean that the smallest Hamming distance between colliding inputs for every hash function is 2. There are functions with a minimum Hamming distance of 1: for example, the trivial hash function truncate. That is, given an n-bit hash function which simply drops all but the first n bits, varying bit n+1 will (because it is ignored by the algorithm) give a collision.

So, when it comes to particular hash functions, the answer could be 1 or 2.

Others have argued that for MD5 and other standard cryptographic hash functions it will probably be 1. This is a purely probabilistic argument, but in the absence of evidence to the contrary it is reasonable to use probability with hash functions which are designed to behave randomly.

- I think this explanation is both the most comprehensive and the easiest to follow. +1 – John Ellmore, Jul 10 '19 at 19:52

---

The answer is 1 bit (Hamming distance = 1) for any cryptographic hash algorithm.

There are definitely collisions, since the digest of the MD5 algorithm is always 128 bits long but there are more than 2^128 possible inputs.

We can explain this via the pigeonhole principle.

## Mathematical explanation

Let's say we take an input message of 3 bits.

There are 8 possibilities in total, because 2^3 = 8:

- 000
- 001
- 010
- 011
- 100
- 101
- 110
- 111

So for an input length of n bits we have 2^n possible values.

If you take the first bit-string as an example (000), you can easily see that there are three possibilities that have a Hamming distance of 1 (001, 010, 100).

In theory you could just take a bit-string of length 2^129 where all bits are zeros (000...000). We hash this bit-string and call it A. Then replace the first zero with 1 (000...001) and look for a collision with A; if not, replace the second zero with 1 (000...010), and so on. This will definitely give you a hash collision, since 2^129 > 2^128 (you have 2^129 possible inputs but only 2^128 possible outputs). This is the simplest example I can think of (although it would take far too long to carry out).

Note that this is the case if the assumption holds up that MD5 is a perfect hash function (and it definitely isn't). In practice we could perform this experiment with far fewer than 2^129 bits and expect a collision.

Note also that you can't be sure to get every possible hash output with the procedure explained above. The pigeonhole principle only says that there are at least some collisions. There could be a hash value that doesn't correspond with any input, i.e. there may be no input that generates the hash value of 128 zero bits (000...000). We have the assumption that every hash value is possible, but we can't prove it.

The same experiment could in theory also be performed with other hash functions (MD5, SHA-1, SHA-2, etc.) if we accept that there really is no limit on inputs (in practice there is an input-size limit). You would just have to change the length of the possible hashes for the experiment. It would even apply to a perfect hash function.

- Although there is a wonderful elegance to this explanation, I am not convinced that it shows a 1-bit bound. I think it shows a 2-bit bound. Yes, the pigeonhole principle says that there must be some collision in the 2^129 hashes of possible one-hot strings, but two different one-hot strings differ in 2 bits. Jul 9 '19 at 8:44
- For a counterexample, consider a function which maps all strings with an even number of bits hot to A and all with an odd number of bits hot to B. Of course there are an appalling number of collisions, but none from single-bit-difference strings. (This function is of course not MD5, so MD5 may indeed have a single-bit-difference collision.) Jul 9 '19 at 8:48
- I understand your argument, but it doesn't hold. The pigeonhole principle does not guarantee that there is a collision between the hash of A and that of any of the one-hot strings. Jul 9 '19 at 9:15
- To extend the counterexample into more of a hash, consider an algorithm which finds the regular MD5 and then appends a 1 if the parity of the input string is odd, and appends a zero otherwise. That means there cannot be a collision between two input strings with different parity. Jul 9 '19 at 9:23
- I also don't understand why this is so highly upvoted. This proof is straight-up wrong and it's easy to find counterexamples, as already stated. Jul 10 '19 at 5:06

---

There are two answers to this: one practical, and one theoretical.

First, the practical one: MD5 is a broken hash function, and we know of collisions for it, and a quick web search turned up a collision with a Hamming distance of 6.

Second, the theoretical one: most cryptographic hash functions are designed to be a reasonable approximation of a random function (this isn't usually the definition you see in textbooks, but it's an important design goal, due to how hashes are used in practice). MD5 turns out to be a poor approximation of a random function (because it's known to be broken), but let's assume it's not.

If you take some random binary data and a random neighbour (Hamming distance 1), there's a one in 2^128 chance that there'll be a collision — simply because there's a one in 2^128 chance of any other piece of data being a collision. That's very unlikely, but you can try again with a different piece of data and its neighbour. Every time you try, you've got a 1 in 2^128 chance of finding a collision, so if you keep trying forever (which is a very long time), you're almost certain to find a collision with a neighbour.

So the theoretical minimum collision distance is 1, and we suspect such a collision exists.

But in practice, the time you'd need to find this collision is prohibitively large (larger than the age of the universe). Indeed, in a well-designed cryptographic hash function, the time taken to find a collision at all (i.e., not limited to a neighbour) should be prohibitively large.

We shouldn't be able to find any collisions in MD5 at all in a reasonable amount of time. The fact that we can is why we say it is broken.

- "so if you keep trying forever (which is a very long time), you're almost certain to find a collision with a neighbour" does not follow, because there are at most 2^l neighbors of a length-l input. In the random model, you'd need an input of astronomical length on the order of 2^128 to have a high probability of finding a neighboring collision. Jul 9 '19 at 18:11
- @R.. Nah, you can make 2^128 pairs differing in one bit with only 129 bits of input: take any 128-bit string followed by 0 and pair it with the same string followed by 1. One in 2^128 pairs collide given 128-bit outputs, so you should be able to make short collisions with one-bit differences in inputs (if you had forever, of course). I added an answer that tries to spell it out more. Jul 9 '19 at 18:53
- @twotwotwo: I'm assuming a fixed input you want to have a neighboring collision with. Jul 9 '19 at 19:20
- I don't see that restriction in the question: it says "are there two plaintext strings with a Levenshtein distance of 1 which share the same MD5 hash", not "is there a collision with a distance of 1 from my fixed example string". It's sort of like the difference between a collision attack and a second-preimage attack. I think here, like in a collision attack, you can choose both inputs. Jul 9 '19 at 19:26
- (Slight rev. to my earlier comment: I don't think 129 bits is the minimum to get 2^128 pairs differing by a bit--maybe it's 123?--just the suffixes trick is easy to describe and makes it easy to count pairs.) Jul 9 '19 at 21:16

---

An important aspect of cryptographic hash functions is that even the smallest difference in input usually results in a different output. But given the unlimited input space compared to the limited output space of the cryptographic hash, it is likely that sequences with only small differences (like a single bit) but the same hash value exist.

But for a more reliable statement and maybe some math behind it, I recommend asking at crypto.stackexchange.com.

---

We can prove an upper bound of 2 bits (Hamming distance = 2) for any algorithm.

## Upper bound

This upper bound is for hashing algorithms whose output is a bit string of length 128 (like MD5). It can be generalised by replacing 128 with n.

Let A be any bit-string of length 2^128.

Let S be the set containing A and all its neighbors. Here a neighbor is a string that differs from A in exactly one place.

Since there are 2^128 bits in the string, |S| = 2^128 + 1.

The pigeonhole principle tells us that any hashing algorithm whose output is a 128-bit string must have at least one collision on the set S (since |S| exceeds the 2^128 possible outputs).

Since the Hamming distance between any two elements in S is at most 2, we have proven the upper bound.

## Lower bound

We can prove a lower bound of 2 for a hash function that optimizes the minimum Hamming distance between collisions.

Intuition

Consider a hashing function that outputs the parity of an input string. This hashing function will not have any collisions on neighboring input strings. A hashing algorithm that optimizes the minimal distance between collisions will be at least as good.

Graph theory

With a bit of knowledge about graph coloring and bipartite graphs, we can create a slightly more formal proof.

Consider an undirected graph G. Its nodes are the binary input strings of some length n, and there is an edge between two nodes if and only if the input strings have a Hamming distance of 1. The color of a node will correspond to its hash.

Since the parity of a binary string is always different from that of its neighbors, this graph must be bipartite.

A bipartite graph can be colored with two colors such that no neighbors share a color. What this means is that a hashing function with at least one bit of output (two options) can avoid a collision between any neighboring input strings.

## Conclusion

We have proven that for MD5, and any other hashing algorithm, there exist two input strings with a Hamming distance of at most 2 that cause a collision.

For a hashing algorithm that maximizes this distance, we can prove a lower bound of 2.

- "It can be generalised by replacing 128 with n" ... goes on to use 128 the whole way. Nice. I mean, yes, AFAICT the proof holds, so +1, but I just find that a bit silly. Also, why doesn't that intuition work? Under our definition of neighbor (exactly one bit flipped) you can't maintain parity, so if maintaining parity is required to collide, then neighbors can't possibly collide. With your upper bound earlier, that's the proof. Or is that not mathematically rigorous enough? (Genuine question, there, I'm rather awful at proofs) – Nic, Jul 10 '19 at 18:38
- @Nic Making an assumption "without loss of generality" is pretty common in math proofs or at least explanations (there's even a wiki page for it, weirdly enough). You can prove something for a special case and then show how to generalize it for all cases; your proof is just as valid as if you'd done it more generally — but might be easier to follow. – Voo, Jul 13 '19 at 10:06

---

MD5 and SHA-1 are badly broken functions. But you can think about an abstract good cryptographic hash function, pretend it generates a different random number of some length for each different input, and model the collisions you'd expect that way.

The XOR of two random hashes is another random number of the same length. So you can generate a random number, the length of your hash function's output, by picking a string s and XORing hash(s followed by byte 0x00) with hash(s followed by byte 0x01).

When the number you get from that XOR is 0, you have a collision. Now, try all 2^128 16-byte strings as s, and do the XOR of two hashes as above. One of the 2^128 128-bit random numbers you get will be zero, more likely than not — I think the probability is (very close to) 1 − 1/e.

If you get unlucky and don't get a collision, you try a few more times with 0x00 and 0x01 replaced by a different pair of suffixes that differ in one bit (e.g. 0x02 and 0x03, or multiple bytes when you run out of one-byte pairs). As you try more times, the chance you still don't get any collisions from a random-ish hash drops exponentially.

You can model it more precisely than that and fill in more details. But I hope that's enough to intuitively suggest that a good hash will probably have a colliding pair of inputs that only differ by one bit and aren't much longer than the hash's output.

There isn't much you can do with that, since you can't try 2^128 inputs to a hash; we set output lengths specifically to make those searches impossible. Fun to see that examples like that ought to exist out there, though.

---

Simple answer: MD5's output is a finite set, meaning that since an MD5 hash is 32 characters long, made up of hex characters, you could literally write out or calculate every combination. The input set, however, is infinite; there is no limit to the things that could be put into an MD5 hash. With an infinite input set and a finite output set, there must be overlap from different inputs.

- This does not answer the question, which is not about whether there are colliding inputs, but whether it can be determined how similar those inputs might be. Jul 10 '19 at 0:34
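The two toy constructions that recur in the answers above — a truncating hash, which does have Hamming-distance-1 collisions, and a parity-appending hash, which cannot — can be made concrete in a few lines. These are illustrative toy functions written for this sketch, not real hash functions, and the names are mine.

```python
# Toy hashes over bit-strings (Python strings of "0"/"1") illustrating the
# arguments above. Neither is a real cryptographic hash.

def truncate_hash(bits: str, n: int = 8) -> str:
    # Keep only the first n bits: anything beyond position n is ignored,
    # so flipping a later bit yields a collision at Hamming distance 1.
    return bits[:n].ljust(n, "0")

def parity_hash(bits: str, n: int = 8) -> str:
    # Append the parity of the whole input. Two inputs at Hamming distance 1
    # have different parity, so they can never collide under this hash.
    parity = str(bits.count("1") % 2)
    return truncate_hash(bits, n) + parity

a = "0" * 9            # nine zero bits
b = "0" * 8 + "1"      # differs from a in exactly one (truncated-away) bit

assert truncate_hash(a) == truncate_hash(b)   # distance-1 collision
assert parity_hash(a) != parity_hash(b)       # the parity bit prevents it
```

This is exactly the distinction drawn above: for some hash functions the minimum collision distance is 1, while a parity-encoding construction forces it up to 2.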
https://whatisconvert.com/47-feet-in-millimeters
# What is 47 Feet in Millimeters?

## Convert 47 Feet to Millimeters

To calculate 47 Feet to the corresponding value in Millimeters, multiply the quantity in Feet by 304.8 (the conversion factor). In this case we multiply 47 Feet by 304.8 to get the equivalent result in Millimeters:

47 Feet × 304.8 = 14,325.6 Millimeters

47 Feet is equivalent to 14,325.6 Millimeters.

## How to convert from Feet to Millimeters

The conversion factor from Feet to Millimeters is 304.8. To find out how many Millimeters there are in a number of Feet, multiply by the conversion factor or use the Length converter above. Forty-seven Feet is equivalent to fourteen thousand three hundred twenty-five point six Millimeters.

## Definition of Foot

A foot (symbol: ft) is a unit of length. It is equal to 0.3048 m, and is used in the imperial system of units and United States customary units. The unit of foot derives from the human foot. It is subdivided into 12 inches.

## Definition of Millimeter

The millimeter (symbol: mm) is a unit of length in the metric system, equal to 1/1000 meter (or 1 × 10⁻³ meter), which is also an engineering standard unit. 1 inch = 25.4 mm.

## Using the Feet to Millimeters converter you can get answers to questions like the following:

- How many Millimeters are in 47 Feet?
- 47 Feet is equal to how many Millimeters?
- How to convert 47 Feet to Millimeters?
- How many is 47 Feet in Millimeters?
- What is 47 Feet in Millimeters?
- How much is 47 Feet in Millimeters?
- How many mm are in 47 ft?
- 47 ft is equal to how many mm?
- How to convert 47 ft to mm?
- How many is 47 ft in mm?
- What is 47 ft in mm?
- How much is 47 ft in mm?
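The multiply-by-the-conversion-factor rule described above is a one-liner in code. This is a small sketch; the function name is mine.

```python
# 1 ft = 0.3048 m = 304.8 mm (exact by definition), as stated above.
FEET_TO_MM = 304.8

def feet_to_mm(feet: float) -> float:
    # Multiply the quantity in feet by the conversion factor.
    return feet * FEET_TO_MM

print(feet_to_mm(47))  # approximately 14325.6
```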
https://www.studypool.com/discuss/198611/7x-2-1-3-7x-5-1-3-3-1
# (7x-2)^(1/3) + (7x+5)^(1/3) = 3

### Question Description

Solve (7x − 2)^(1/3) + (7x + 5)^(1/3) = 3.

### Answer (Bilal_Mursaleen, New York University)

∛(7x − 2) + ∛(7x + 5) = 3

Let a = ∛(7x − 2) and b = ∛(7x + 5).

We thus have b³ = 7x + 5 = (7x − 2) + 7 = a³ + 7.

Since a + b = 3, we have b = 3 − a, and so:

b³ = (3 − a)³ = a³ + 7

Using (p − q)³ = p³ − q³ − 3pq(p − q):

27 − a³ − 3(3)(a)(3 − a) = a³ + 7

27 − a³ − 9a(3 − a) = a³ + 7

27 − a³ − 27a + 9a² = a³ + 7

2a³ − 9a² + 27a − 20 = 0

2a³ − 2a² − 7a² + 7a + 20a − 20 = 0

2a²(a − 1) − 7a(a − 1) + 20(a − 1) = 0

(a − 1)(2a² − 7a + 20) = 0

(a − 1)(a² − (7/2)a + 10) = 0

Completing the square in the quadratic factor:

(a − 1)[(a − 7/4)² − 49/16 + 160/16] = 0

(a − 1)[(a − 7/4)² + 111/16] = 0

The bracketed factor is always positive, so the only real solution is a = 1.

Then a³ = 7x − 2 = 1, so 7x = 3.

HENCE x = 3/7.
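The solution x = 3/7 is easy to sanity-check numerically: at that value, 7x − 2 = 1 and 7x + 5 = 8, so the left-hand side is ∛1 + ∛8 = 1 + 2 = 3. A quick check (the variable names are mine):

```python
# Verify x = 3/7 satisfies (7x - 2)^(1/3) + (7x + 5)^(1/3) = 3.
x = 3 / 7
a = (7 * x - 2) ** (1 / 3)   # cube root of 1  -> 1
b = (7 * x + 5) ** (1 / 3)   # cube root of 8  -> 2
print(a + b)                 # approximately 3.0 (up to floating-point error)
```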
https://codeahoy.com/questions/ds-interview/25/
# What are the main parameters of the random forest model?

- `max_depth`: the longest path between the root node and a leaf
- `min_samples_split`: the minimum number of observations needed to split a given node
- `max_leaf_nodes`: caps the splitting of the tree and hence limits its growth
- `min_samples_leaf`: the minimum number of samples in a leaf node
- `n_estimators`: the number of trees
- `max_samples`: the fraction of the original dataset given to any individual tree in the model
- `max_features`: limits the maximum number of features provided to the trees in the random forest model

## Selecting the depth of the trees in a random forest

The greater the depth, the more information is extracted from the tree. There is a limit to this, however: even though the algorithm is defensive against overfitting, it may still learn complex features of the noise present in the data and, as a result, overfit on noise. Hence there is no hard rule of thumb for deciding the depth, but the literature suggests a few tips for tuning the depth of the trees to prevent overfitting:

- limit the maximum depth of a tree
- limit the number of test nodes
- limit the minimum number of objects at a node required to split
- do not split a node when at least one of the resulting subsample sizes is below a given threshold
- stop developing a node if it does not sufficiently improve the fit

## How many trees do we need in a random forest?

The number of trees in a random forest is controlled by `n_estimators`, and a random forest reduces overfitting by increasing the number of trees. There is no fixed rule of thumb for deciding the number of trees in a random forest; it is rather fine-tuned to the data, typically starting off by taking the square of the number of features (n) present in the data, followed by tuning until we get the optimal results.
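The parameter names above match scikit-learn's `RandomForestClassifier`, so a hedged sketch of wiring them together might look like the following (this assumes scikit-learn is available; the toy dataset and chosen values are illustrative, not tuned recommendations):

```python
from sklearn.ensemble import RandomForestClassifier

# A minimal sketch, assuming scikit-learn's parameter spellings
# (min_samples_split, max_samples, etc.). Values are arbitrary examples.
model = RandomForestClassifier(
    n_estimators=25,        # number of trees in the forest
    max_depth=4,            # longest root-to-leaf path allowed
    min_samples_split=4,    # observations needed to split a node
    min_samples_leaf=2,     # minimum samples allowed in a leaf
    max_leaf_nodes=15,      # caps the growth of each tree
    max_samples=0.8,        # fraction of the dataset given to each tree
    max_features="sqrt",    # features considered at each split
    random_state=0,
)

# Tiny synthetic dataset: the class is 1 when the two features agree in sign.
X = [[i % 5 - 2, (i * 3) % 7 - 3] for i in range(60)]
y = [1 if (row[0] >= 0) == (row[1] >= 0) else 0 for row in X]
model.fit(X, y)
assert len(model.estimators_) == 25  # one fitted tree per n_estimators
```

In practice these values would be set by cross-validated search rather than by hand, in line with the tuning advice above.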
https://answers.everydaycalculation.com/compare-fractions/5-4-and-8-7
Solutions by everydaycalculation.com

## Compare 5/4 and 8/7

1st number: 1 1/4, 2nd number: 1 1/7

5/4 is greater than 8/7.

#### Steps for comparing fractions

1. Find the least common denominator, i.e. the LCM of the two denominators: LCM of 4 and 7 is 28.
2. For the 1st fraction, since 4 × 7 = 28, 5/4 = (5 × 7)/(4 × 7) = 35/28.
3. Likewise, for the 2nd fraction, since 7 × 4 = 28, 8/7 = (8 × 4)/(7 × 4) = 32/28.
4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction.
5. 35/28 > 32/28, so 5/4 > 8/7.
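The same comparison can be reproduced in code, both by the common-denominator steps above and directly with Python's exact `Fraction` type (this sketch and its variable names are mine):

```python
from fractions import Fraction
from math import lcm

# Step 1: least common denominator of 4 and 7.
common = lcm(4, 7)                      # 28

# Steps 2-3: rescale each numerator to the common denominator.
n1 = 5 * (common // 4)                  # 35, so 5/4 == 35/28
n2 = 8 * (common // 7)                  # 32, so 8/7 == 32/28

# Steps 4-5: with equal denominators, compare numerators.
assert n1 > n2

# Or skip the manual steps entirely with exact rational arithmetic.
assert Fraction(5, 4) > Fraction(8, 7)
```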
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8628155,"math_prob":0.9960614,"size":436,"snap":"2019-13-2019-22","text_gpt3_token_len":209,"char_repetition_ratio":0.3310185,"word_repetition_ratio":0.0,"special_character_ratio":0.49541286,"punctuation_ratio":0.06349207,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9950061,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T04:02:14Z\",\"WARC-Record-ID\":\"<urn:uuid:c483450c-ebfa-4d90-979a-310a205f6c22>\",\"Content-Length\":\"8536\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14e8dd3c-4a51-48bb-9a30-7a21a9c5fd8c>\",\"WARC-Concurrent-To\":\"<urn:uuid:32527638-4645-4a87-94de-757b8614476c>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/5-4-and-8-7\",\"WARC-Payload-Digest\":\"sha1:NJSAV3EOCUARRGPP7DZEO6DBNL4ARYQX\",\"WARC-Block-Digest\":\"sha1:6Y2ITX3FLUHKHZXDO5QAYX6A3S3BFMNS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204790.78_warc_CC-MAIN-20190326034712-20190326060712-00072.warc.gz\"}"}
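The LCM-based comparison steps in the everydaycalculation.com record above can be sketched in a few lines of Python. `compare_fractions` is a name chosen here for illustration, not part of the source page:

```python
from math import gcd

def compare_fractions(a, b, c, d):
    """Compare a/b with c/d by rescaling both to the least common denominator.

    Returns 1 if a/b > c/d, -1 if a/b < c/d, and 0 if they are equal.
    """
    lcd = b * d // gcd(b, d)           # least common denominator, e.g. lcm(4, 7) = 28
    left = a * (lcd // b)              # 5/4 -> 35/28
    right = c * (lcd // d)             # 8/7 -> 32/28
    return (left > right) - (left < right)

print(compare_fractions(5, 4, 8, 7))   # prints 1, i.e. 5/4 > 8/7
```

With the common denominator 28, 5/4 rescales to 35/28 and 8/7 to 32/28, matching the page's steps.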
https://metanumbers.com/19619
[ "## 19619\n\n19,619 (nineteen thousand six hundred nineteen) is an odd five-digit composite number following 19618 and preceding 19620. In scientific notation, it is written as 1.9619 × 10⁴. The sum of its digits is 26. It has a total of 2 prime factors and 4 positive divisors. There are 18,744 positive integers (up to 19619) that are relatively prime to 19619.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity: Odd\n• Number length: 5\n• Sum of Digits: 26\n• Digital Root: 8\n\n## Name\n\nShort name: 19 thousand 619\nFull name: nineteen thousand six hundred nineteen\n\n## Notation\n\nScientific notation: 1.9619 × 10⁴\nEngineering notation: 19.619 × 10³\n\n## Prime Factorization of 19619\n\nPrime factorization: 23 × 853 (composite number)\n\nω(n) 2 — Total number of distinct prime factors\nΩ(n) 2 — Total number of prime factors\nrad(n) 19619 — Product of the distinct prime numbers\nλ(n) 1 — Parity of Ω(n), such that λ(n) = (-1)^Ω(n)\nμ(n) 1 — Returns: 1 if n has an even number of prime factors (and is square free); −1 if n has an odd number of prime factors (and is square free); 0 if n has a squared prime factor\nΛ(n) 0 — Returns log(p) if n is a power p^k of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 19,619 is 23 × 853. Since it has a total of 2 prime factors, 19,619 is a composite number.\n\n## Divisors of 19619\n\n1, 23, 853, 19619\n\n4 divisors\n\nEven divisors: 0\nOdd divisors: 4\n4k+1 divisors: 2\n4k+3 divisors: 2\n\nτ(n) 4 — Total number of the positive divisors of n\nσ(n) 20496 — Sum of all the positive divisors of n\ns(n) 877 — Sum of the proper positive divisors of n\nA(n) 5124 — Sum of divisors (σ(n)) divided by the total number of divisors (τ(n))\nG(n) 140.068 — The nth root of the product of the n divisors\nH(n) 3.82884 — Total number of divisors (τ(n)) divided by the sum of the reciprocals of the divisors\n\nThe number 19,619 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 19,619) is 20,496, the average is 5,124.\n\n## Other Arithmetic Functions (n = 19619)\n\nφ(n) 18744 — Total number of positive integers not greater than n that are coprime to n\nλ(n) 9372 — Smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n\nπ(n) ≈ 2228 — Total number of primes less than or equal to n\nr₂(n) 0 — The number of ways n can be represented as the sum of 2 squares\n\nThere are 18,744 positive integers (less than 19,619) that are coprime with 19,619. And there are approximately 2,228 prime numbers less than or equal to 19,619.\n\n## Divisibility of 19619\n\nm:       2 3 4 5 6 7 8 9\nn mod m: 1 2 3 4 5 5 3 8\n\n19,619 is not divisible by any number less than or equal to 9.\n\n## Classification of 19619\n\n• Arithmetic\n• Semiprime\n• Deficient\n• Polite\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (19619)\n\nBase 2 (Binary): 100110010100011\nBase 3 (Ternary): 222220122\nBase 4 (Quaternary): 10302203\nBase 5 (Quinary): 1111434\nBase 6 (Senary): 230455\nBase 8 (Octal): 46243\nBase 10 (Decimal): 19619\nBase 12 (Duodecimal): b42b\nBase 16 (Hexadecimal): 4ca3\nBase 20 (Vigesimal): 290j\nBase 36 (Base36): f4z\n\n## Basic calculations (n = 19619)\n\n### Multiplication\n\nn×2 = 39238, n×3 = 58857, n×4 = 78476, n×5 = 98095\n\n### Division\n\nn⁄2 = 9809.5, n⁄3 = 6539.67, n⁄4 = 4904.75, n⁄5 = 3923.8\n\n### Exponentiation\n\nn² = 384905161, n³ = 7551454353659, n⁴ = 148151982964435921, n⁵ = 2906593753779268334099\n\n### Nth Root\n\n²√n = 140.068, ³√n = 26.9707, ⁴√n = 11.835, ⁵√n = 7.21997\n\n## 19619 as geometric shapes\n\n### Circle (radius = n)\n\nDiameter 39238, Circumference 123270, Area 1.20922e+09\n\n### Sphere (radius = n)\n\nVolume 3.16315e+13, Surface area 4.83686e+09, Circumference 123270\n\n### Square (side = n)\n\nPerimeter 78476, Area 3.84905e+08, Diagonal 27745.5\n\n### Cube (edge = n)\n\nSurface area 2.30943e+09, Volume 7.55145e+12, Space diagonal 33981.1\n\n### Equilateral Triangle (side = n)\n\nPerimeter 58857, Area 1.66669e+08, Height 16990.6\n\n### Triangular Pyramid (edge = n)\n\nSurface area 6.66675e+08, Volume 8.89947e+11, Height 16018.8\n\n## Cryptographic Hash Functions\n\nmd5: 846f87c3be78ef2dbb46bad3d6ec911f\nsha1: a2b74596f317398560f335ef2e6c728c620c4e60\nsha256: 9c6042f6ca810abdc6dd1cee01d068ba5acc1aa6ea3a3afefdb9c86b09c60d3f\nsha512: d752ab9203f927260b0aba838dcd011ef807570d2bef962197918edbdad20e3268ec747021e9776dd61c447e577207384ed91aa8f76a6824bd5360cb5d452447\nripemd-160: d75c8cdd6b4d4d45bf18477d5e06e1f19a9372a4" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61724997,"math_prob":0.97595155,"size":4522,"snap":"2021-21-2021-25","text_gpt3_token_len":1592,"char_repetition_ratio":0.118857905,"word_repetition_ratio":0.029498525,"special_character_ratio":0.4444936,"punctuation_ratio":0.07503234,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99558556,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T11:19:28Z\",\"WARC-Record-ID\":\"<urn:uuid:cc175972-5e8e-47d8-a5c0-08f774cb9a1a>\",\"Content-Length\":\"48148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f184b368-4040-42a3-9322-434ee6cf734d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9c57569-f56f-40de-8a67-040fcd1217bb>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/19619\",\"WARC-Payload-Digest\":\"sha1:USXTGIWSCGDDX7HOZHHJJDAXXBFIDPZZ\",\"WARC-Block-Digest\":\"sha1:6QAZZSACJSXO34BISTZM54WFGYTPZ6X3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487608702.10_warc_CC-MAIN-20210613100830-20210613130830-00463.warc.gz\"}"}
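The divisor and totient values quoted in the metanumbers.com record above (σ(n) = 20496, φ(n) = 18744 for n = 19619) can be re-derived with a short trial-division sketch; the helper names here are mine, not the site's:

```python
def divisors(n):
    """All positive divisors of n by trial division up to sqrt(n)."""
    small, large = [], []
    i = 1
    while i * i <= n:
        if n % i == 0:
            small.append(i)
            if i != n // i:
                large.append(n // i)
        i += 1
    return small + large[::-1]

def totient(n):
    """Euler's phi via the product formula over distinct prime factors."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # multiply by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                          # leftover prime factor
        result -= result // m
    return result

n = 19619
d = divisors(n)
print(d, sum(d), totient(n))           # [1, 23, 853, 19619] 20496 18744
```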
https://www.physicsforums.com/threads/integral-question.180117/
[ "# Integral question\n\nGold Member\n\n## Homework Statement\n\nf is a function defined as: for all x>-1 f(x) = 1/((ln(x+1))^2 + 1) and for x=-1 f(x)=0.\na) Prove that $$F(x) = \\int^{x^2 + 2x}_{0} f(t)dt$$ is defined in R and has a derivative there. Find the derivative.\n\nb) g is defined as: for all x>-1 g(x) = f(x) and for x=-1 g(x) = -1.\nIs $$G(x) = \\int^{x^2 + 2x}_{0} g(t)dt$$ defined in R? Does it have a derivative there?\n\n## The Attempt at a Solution\n\na) First I proved that f is continuous for all x >= -1. Then, since x^2 + 2x >= -1 for all x in R, F(x) is defined and its derivative is easy to get with the chain rule. I got:\nF'(x) = (2x+2)/((ln(x^2 + 2x + 1))^2 + 1)\n\nb) Since g differs from f in only one point, F(x) = G(x) and so G is defined and has a derivative for all x in R.\n\nIs that right? I'm mostly worried about (b).\nThanks.\n\nGib Z\nHomework Helper\nLooks good to me :) Other than the small error that G is defined and has a derivative for all x > -1, not all real values.\n\nHallsofIvy" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9054408,"math_prob":0.999419,"size":851,"snap":"2020-45-2020-50","text_gpt3_token_len":316,"char_repetition_ratio":0.15348288,"word_repetition_ratio":0.0121951215,"special_character_ratio":0.38895416,"punctuation_ratio":0.065727696,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999502,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T04:26:46Z\",\"WARC-Record-ID\":\"<urn:uuid:21b3a75c-4cc0-4d3f-9616-4e29d8ce7652>\",\"Content-Length\":\"68464\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02d43c3e-ba13-4f19-bfe1-6f2d65a1ad4c>\",\"WARC-Concurrent-To\":\"<urn:uuid:2093e4fe-3f27-48cc-a94b-1eef6b51f8aa>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/integral-question.180117/\",\"WARC-Payload-Digest\":\"sha1:MSFFVWG57KWRB36ZATDEO2DSR54E327G\",\"WARC-Block-Digest\":\"sha1:QFOTV44G5QPP7QD4OJOAX4O2ANTNS3RQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141196324.38_warc_CC-MAIN-20201129034021-20201129064021-00506.warc.gz\"}"}
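The chain-rule answer in the physicsforums thread above, F'(x) = (2x+2)/((ln(x²+2x+1))² + 1), can be sanity-checked numerically by central-differencing a trapezoid-rule approximation of F. This is an illustrative sketch (function names and step counts are mine):

```python
import math

def f(t):
    # integrand: continuous for t > -1 (the thread sets f(-1) = 0)
    return 1.0 / (math.log(t + 1.0) ** 2 + 1.0)

def F(x, steps=20000):
    # F(x) = integral of f from 0 to x^2 + 2x, approximated by the trapezoid rule
    b = x * x + 2.0 * x
    h = b / steps
    total = 0.5 * (f(0.0) + f(b))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

def F_prime(x):
    # chain rule: f evaluated at the upper limit, times the limit's derivative
    return (2.0 * x + 2.0) / (math.log(x * x + 2.0 * x + 1.0) ** 2 + 1.0)

x, eps = 1.5, 1e-4
numeric = (F(x + eps) - F(x - eps)) / (2.0 * eps)
print(abs(numeric - F_prime(x)))   # should be tiny
```

The numeric derivative and the closed form agree to within the quadrature and differencing error, supporting the poster's part (a).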
http://sjce.journals.sharif.edu/article_786.html
[ "# Modeling of reinforced concrete conical shell foundations with different slopes\n\nArticle type: Research\n\nAuthors\n\nDepartment of Civil Engineering, Ferdowsi University of Mashhad\n\nAbstract\n\nShell foundations are foundations that, owing to their structure and shape, have smaller thicknesses and greater strength than flat types. This reduction in thickness leads to savings in materials and lower costs. In addition, their particular shape increases the bearing capacity of the soil, and the behavioral characteristics improve markedly. In this study, the behavior of a number of reinforced concrete conical shell foundations with practical dimensions is investigated through finite element modeling with the ABAQUS software. In this materially nonlinear analysis, the concrete, the soil and the reinforcing steel are fully modeled. The numerical results show that a conical shell foundation is stronger than its flat counterpart. As the height of the conical foundation increases, the strength increases, reaching its maximum at an optimum height.\n\nKeywords\n\nTitle [English]\n\n### ANALYSIS OF REINFORCED CONCRETE CONE SHELL FOUNDATIONS WITH DIFFERENT SLOPES\n\nAuthors [English]\n\n• M. Sheikhi\n• H. Haji Kazemi\nDept. of Civil Engineering, Ferdowsi University of Mashhad\nAbstract [English]\n\nShell foundations are structures which derive their strength from geometry rather than mass. This quality enables them to obtain maximum structural integrity with a minimum consumption of construction materials. The use of shells in the field of foundation engineering has drawn considerable interest in different parts of the world. Shell foundations of different shapes have been investigated, on the structural and geotechnical side, at the elastic stage. Because closed-form solutions are extremely complex, especially at ultimate and nonlinear stages, the present investigation has resorted to numerical analysis by the finite element method, using the ABAQUS analysis package. The conical shell is the simplest form of shell that can be employed in foundation engineering, due to its singly curved surface. Reinforced concrete conical shell foundations have been taken up for these studies. There is close agreement between the analytical and test results; to show that, the results of the test conducted on the elastic model of a conical shell footing by Kurian (2006) have been represented in this paper and compared with finite element results. The behavior of concrete, soil and bars has been studied in nonlinear form for these kinds of shell foundation. The Mohr-Coulomb plasticity model is used to model soils with the classical Mohr-Coulomb yield criterion. The concrete damaged plasticity model provided in ABAQUS is used for the analysis of concrete. The results presented reveal the general superiority of conical shell foundations. The settlement rate of a shell foundation increases with increasing load. This is because of the nonlinear behavior of soil and concrete and the decrease in stiffness of these materials. Maximum settlement occurs under the center of the shell foundation, beneath the load. The settlement of the edge of the shell is less than at the center of the shell, but this difference is not very pronounced, because of the high stiffness of shell foundations, and is about 9% for the conditions of this investigation. The resistance of the system is increased by increasing the angle of the cone wall. This increase has an optimum angle, which is about 40 degrees. There is about a 90% increase in bearing capacity for the optimum angle of the reinforced concrete cone shell foundation compared to its flat counterpart.\n\nKeywords [English]\n\n• Conical shell foundation\n• finite element method\n• Reinforced concrete\n• soil-structure interaction" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9151651,"math_prob":0.98897505,"size":2524,"snap":"2023-14-2023-23","text_gpt3_token_len":551,"char_repetition_ratio":0.14761905,"word_repetition_ratio":0.0,"special_character_ratio":0.18383518,"punctuation_ratio":0.08139535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9838878,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T03:15:56Z\",\"WARC-Record-ID\":\"<urn:uuid:2bfb342f-4a42-4d76-b2f9-01c638b5d378>\",\"Content-Length\":\"57448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc523e91-130c-4900-ab73-e6c7fb56a082>\",\"WARC-Concurrent-To\":\"<urn:uuid:58f4b1f1-b048-4500-bae2-501b4d1cd294>\",\"WARC-IP-Address\":\"81.31.168.62\",\"WARC-Target-URI\":\"http://sjce.journals.sharif.edu/article_786.html\",\"WARC-Payload-Digest\":\"sha1:S4T3ADB6SHL6PSJOECGU6BXHW73DMD7X\",\"WARC-Block-Digest\":\"sha1:CJEHBLV2LHR26KZQDY4AMF7DDDW7ZQXT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945242.64_warc_CC-MAIN-20230324020038-20230324050038-00483.warc.gz\"}"}
http://www.uskino.com/articleshow_208.html
[ "Technical articles\n\n# Contact angle meter is not just a contact angle goniometer\n\nNumber of hits:5904    Release time:2020-08-25 00:00:00\n\nContact angle meter is not just a contact angle goniometer: the difference between a contact angle meter and a goniometer, from traditional to modern instruments.\n\nAt present, you can easily find a contact angle meter for measuring the contact angle between a solid and a liquid. Among them, some instruments genuinely measure contact angle, but others do not.\n\n1. A contact angle meter based on the Circle, Ellipse or Tangent fitting method is just a goniometer.\n\nMost commercialized contact angle meters use the Circle, Ellipse or Tangent fitting method. These methods worked until better methods became available to characterize the physicochemical properties of the solid. They cannot correct for the effects of gravity, surface tension, contact angle and solid-liquid interface tension; they just measure the angle of a geometric curve. This makes no physicochemical sense. Instruments based on geometric curve fitting cannot tell you more, and the resulting contact angle may mislead you: it offers neither repeatability nor comparability.\n\n(1) Gravity will affect the value of the contact angle even if you dose a drop with a volume of 2 uL.\n\n(2) The ellipse fitting method cannot correct the effect of gravity, and it can only fit 80% of the drop shape, not all of it.\n\n(3) Contact angle is a physicochemical property of the solid; it is more than the angle of a curve.\n\n2. The history of the contact angle meter.\n\nIn the 1950s, Prof. Zisman made the first contact angle goniometer to measure the contact angle of a solid.
They used a reticle in the microscope and rotated it manually to read the value of the contact angle. (W.C. Bigelow, D.L. Pickett, W.A. Zisman, J. Colloid Sci., 1, 513 (1946))", null, "In the late 1980s, with the development of computer technology, commercialized contact angle meters used image analysis to measure the contact angle by capturing the drop image, finding the drop edge in the captured image, then fitting the drop profile with a Circle, Ellipse or Tangent (polynomial, composite-type equation or spline) curve. Today, most commercialized contact angle meters still regard these methods as the main or only methods for measuring contact angle. So, they are just goniometers, not contact angle meters in the real sense.\n\nIn the late 1980s, Prof. A.W. Neumann proposed a new method named ADSA-P to measure the surface tension of a liquid or the interface tension between a liquid and a fluid. It used the Young-Laplace equation, which includes the surface tension, interface tension, gravity coefficient and contact angle. But ADSA-P was used mostly for measuring surface or interface tension, not mainly for measuring contact angle, and the assumption of axisymmetric drops limited its application to contact angle measurement. (Rotenberg, Y., Boruvka, L. and Neumann, A.W., Determination of Surface Tension and Contact Angle from the Shapes of Axisymmetric Fluid Interfaces, J. Colloid Interface Sci. 93, p. 169-183)", null, "In the 1990s, some commercialized contact angle meters made in Germany provided the Young-Laplace equation fitting method to measure contact angle. This method is used for measuring contact angles above 120°; still, these machines regard circle or ellipse fitting as the main method, because the Young-Laplace equation method can only measure the contact angle of an axisymmetric drop.
But the truth is that the drop shape on 98% of solid surfaces is asymmetric due to surface roughness, chemical diversity and heterogeneity of the surface structure.", null, "So, ADSA-P, the Young-Laplace equation fitting method, is the real method for measuring contact angle, and its result is more acceptable than that of the geometric curve fitting methods. But the assumption of axisymmetric drops limited the application of ADSA-P in the measurement of contact angle.\n\n3. The truth that we have to accept: 98% of solid surfaces are asymmetric.\n\nYou cannot find a perfectly circular drop in the top view of the drop shapes on a front windshield. In the same way, more than 98% of solid surfaces show irregular drop shapes.", null, "", null, "", null, "So, the most effective and practical method is measuring the contact angle of a solid surface based on the asymmetric drop.", null, "4. ADSA-RealDrop / TrueDrop is what you need for measuring the contact angle between solid and liquid.\n\nADSA-RealDrop / TrueDrop, based on ADSA-NA and correcting the effect of gravity, measures the asymmetric drop shape on a solid. It also provides the equilibrium contact angle based on the Tadmor method. (In the drop image shown below, the left contact angle is 155.557° and the right contact angle is 146.842°.)", null, "(1) 3D contact angle,\n\n(2) the cleanness of the surface by just one drop,\n\n(3) homogeneity of your sample (a difference of more than 2° between the left and right contact angles means your sample is not homogeneous),\n\n(4) design of special surfaces like rice leaf or bamboo leaf, so that the drop can only roll off in one direction.\n\nOnly ADSA-RealDrop can guarantee a real contact angle meter.
Unlike the Young-Laplace equation fitting method, which can only be used to measure the 2% of drop shapes that are axisymmetric, the ADSA-RealDrop method is the only method that can be used to measure 100% of solid surfaces.\n\n5. Technical route of the 3D contact angle\n\n(1) The simplest way to measure the 3D contact angle: rotating the sample by 360° in the horizontal direction (patent of KINO).\n\n(2) Using a 360° lens to capture the drop images from different view directions (patent of KINO).", null, "(3) A 3D camera such as a TOF camera.\n\n6. A sample of measuring contact angle by various methods:\n\nA contact angle chromatogram of a Setaria viridis leaf.", null, "(1) When it is perpendicular to the lens, the contact angle chromatogram of the Setaria viridis leaf is as shown below. We can see the microvilli of the leaf.", null, "The advancing and receding contact angles of the Setaria viridis leaf.", null, "If you try to fit the same drop shape with other methods, you will not get a result as acceptable as that of ADSA-RealDrop.\n\n(1) Young-Laplace equation fitting method for measuring contact angle.", null, "(2) Ellipse fitting method for measuring contact angle.", null, "" ]
[ null, "http://www.uskino.com/uploadFiles/Image/636805651220408750.jpg", null, "http://www.uskino.com/uploadFiles/Image/636805660368846250.png", null, "http://www.uskino.com/uploadFiles/Image/636805661302752500.png", null, "http://www.uskino.com/uploadFiles/Image/636805657883221250.jpg", null, "http://www.uskino.com/uploadFiles/Image/636805658161502500.png", null, "http://www.uskino.com/uploadFiles/Image/636805658427440000.jpg", null, "http://www.uskino.com/uploadFiles/Image/636805653909940000.png", null, "http://www.uskino.com/uploadFiles/Image/636805665033065000.png", null, "http://www.uskino.com/uploadFiles/Image/636805662921033750.png", null, "http://www.uskino.com/uploadFiles/Image/636805662160096250.jpg", null, "http://www.uskino.com/uploadFiles/Image/636805668774002500.jpg", null, "http://www.uskino.com/uploadFiles/Image/636805669782908750.jpg", null, "http://www.uskino.com/uploadFiles/Image/636805670845096250.png", null, "http://www.uskino.com/uploadFiles/Image/636805671877752500.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8648951,"math_prob":0.7291349,"size":6912,"snap":"2020-45-2020-50","text_gpt3_token_len":1521,"char_repetition_ratio":0.21815287,"word_repetition_ratio":0.0407785,"special_character_ratio":0.20341435,"punctuation_ratio":0.11677116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95159703,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T22:43:21Z\",\"WARC-Record-ID\":\"<urn:uuid:d4cd1122-e100-43cc-924f-3f55569380c0>\",\"Content-Length\":\"29468\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:efb72d42-b4e9-401f-a570-3183e558fd0e>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8f3176c-8a73-45b4-bee9-15ea756c4235>\",\"WARC-IP-Address\":\"47.88.77.65\",\"WARC-Target-URI\":\"http://www.uskino.com/articleshow_208.html\",\"WARC-Payload-Digest\":\"sha1:RSHBDHNNWLK5YBPNVKA2MLIXBT7FRN3G\",\"WARC-Block-Digest\":\"sha1:3IK73ZXAPL5ODX3M4MNJZEIFR3EALQRS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141177607.13_warc_CC-MAIN-20201124224124-20201125014124-00226.warc.gz\"}"}
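The circle-fitting method that the uskino.com article above criticizes can be illustrated with a minimal sketch: a Kåsa algebraic least-squares circle fit on synthetic drop-edge points, with the contact angle read off as arccos(-b/r) for a circular cap of radius r whose center sits at height b relative to the baseline y = 0. All names and the synthetic 60° drop are assumptions for illustration, not KINO's implementation:

```python
import math

def fit_circle(pts):
    """Kasa algebraic least-squares circle fit: returns (cx, cy, r)."""
    # Normal equations for minimizing sum of (x^2 + y^2 + D*x + E*y + F)^2
    M = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        z = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    # Gaussian elimination with partial pivoting on the 3x3 system
    for c in range(3):
        p = max(range(c, 3), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for k in range(c + 1, 3):
            f = M[k][c] / M[c][c]
            for j in range(c, 3):
                M[k][j] -= f * M[c][j]
            rhs[k] -= f * rhs[c]
    sol = [0.0] * 3
    for k in (2, 1, 0):
        s = rhs[k] - sum(M[k][j] * sol[j] for j in range(k + 1, 3))
        sol[k] = s / M[k][k]
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

def contact_angle_deg(cy, r):
    """Contact angle of a circular cap resting on the line y = 0."""
    return math.degrees(math.acos(-cy / r))

# Synthetic drop profile: visible arc of a circle with r = 1, center height -0.5,
# so the true contact angle is arccos(0.5) = 60 degrees.
true_cy, true_r = -0.5, 1.0
phi_max = math.acos(-true_cy / true_r)
pts = [(true_r * math.sin(t), true_cy + true_r * math.cos(t))
       for t in [phi_max * (k / 50.0 - 1.0) for k in range(101)]]
cx, cy, r = fit_circle(pts)
print(round(contact_angle_deg(cy, r), 2))   # recovers 60.0 on this ideal profile
```

On gravity-free synthetic data the fit recovers the angle exactly; the article's point is that real sessile drops deviate from a circular arc, which is precisely what this method cannot capture.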
https://www.karo-bhz.pl/May-03/4001.html
[ "# tensor products\n\n•", null, "### ProductsTensor Global\n\nOur Products. Product Filter. Industry Filter Any Industry Acoustic Cleaners Composite Construction Flooring Foam and Upholstery Industrial Joinery Marine Roofing Tools Transportation. Substrate Filter Any Substrate Aluminum Brick Carpet Carpet Tile Concrete Cork Drywall Fabric Fiberglass Infusion Flexi-Ply Foam FRL FRP Reefer Liners FRP/GRP\n\n•", null, "### Tensors and Tensor Products for Physicists\n\n2007-1-10 · in which they arise in physics. The word tensor is ubiquitous in physics (stress ten-sor moment of inertia tensor field tensor metric tensor tensor product etc. etc.) and yet tensors are rarely defined carefully (if at all) and the definition usually has to do with transformation properties making it difficult to get a feel for these ob-\n\n•", null, "### Tensor productEncyclopedia of Mathematics\n\n2018-7-23 · Tensor product of two unitary modules. V tensor W to V tensor W in the basis consisting of the tensor products of the basis vectors. Tensor product of\n\n•", null, "### Derived Tensor Products and Their Applications IntechOpen\n\n2019-12-9 · Fundaments of derived tensor products. We consider the Abelian category Ab which is conformed by all functor images that are contravariant additive functors F A → Ab on small category of Z(A). Likewise Z(A) is the category of all additive pre-sheaves on A. Likewise we can define this category as of points space\n\n•", null, "### Part III. Tensor Products. KernelsScienceDirect\n\nSuch tensor products carry the locally convex spaces which arise by completion of the tensor products and called \"topologized.\" In any representation of a vector space as a tensor product the first feature that strikes is that of a certain splitting. Splitting of the tensor product type is common in algebra.\n\n•", null, "### Definition and properties of tensor products\n\nThe concept of tensor products can be used to address these problems. 
Us-ing tensor products one can construct operations on two-dimensional functionswhich inherit properties of one-dimensional operations. Tensor products alsoturn out to be computationally efficient.\n\n•", null, "### Lecture 2 Quantum Algorithms 1 Tensor Products\n\n2013-2-16 · A basis for the tensor product space consists of the vectors vi ⊗wj 1 ≤ i ≤ n 1 ≤ j ≤ m and thus a general element of V ⊗W is of the form ∑ i j αijvi ⊗wj This definition extends analogously to tensor products with more than two terms. The tensor product space is also a Hilbert space with the inherited inner product\n\n•", null, "### Tensor products of matrix factorizations Nagoya\n\n•", null, "2010-5-22 · On irreducibility of tensor products of Yangian modules associated with skew Young diagrams. Duke Math. J. 112 343–378 (2002) MATH Article MathSciNet Google Scholar 31. Reshetikhin N. Turaev V. Invariants of 3-manifolds via link polynomials and quantum groups. Invent. Math. 103(3) 547–597 (1991) MATH\n\n•", null, "### Definition and properties of tensor products\n\nas tensor products we need of course that the molecule is a rank 1 matrix since matrices which can be written as a tensor product always have rank 1. The tensor product can be expressed explicitly in terms of matrix products. Theorem 7.5. If S RM → RM and T RN → RN are matrices the action\n\n•", null, "### tensor product of matricesMathOverflow\n\n2021-6-5 · 1 Answer1. Darij s first comment could be made into an answer as follows. where the second equation follows from functoriality of the tensor product. Here both A ⊗ I m and I n ⊗ B are square matrices of size m n m n. Since the determinant from such matrices to the scalar field is a monoid homomorphism the determinant of the last\n\n•", null, "Tensor products If and are finite dimensional vector spaces then the Cartesian product is naturally a vector space called the direct sum of and and denoted . The tensor product is a more complicated object. To define it we start by defining for any set the free vector space over .\n\n•", null, "### Introduction Tensor Products of Linear Maps\n\n2021-4-3 · Continuing our study of tensor products we will see how to combine two linear maps M M0and N N0into a linear map M RN M0 RN0. This leads to at modules and linear maps between base extensions. Then we will look at special features of tensor products of vector spaces (including contraction) the tensor products of R-algebras and\n\n•", null, "### 221A Lecture NotesHitoshi Murayama\n\n2014-1-31 · 3 Tensor Product The word \"tensor product\" refers to another way of constructing a big vector space out of two (or more) smaller vector spaces. You can see that the spirit of the word \"tensor\" is there. It is also called Kronecker product or direct product. 3.1 Space You start with two vector spaces V that is n-dimensional and W that\n\n•", null, "### Math 395. Tensor products and bases V F. Recall that a\n\n2006-7-16 · Math 395. Tensor products and bases Let V and V0 be finite-dimensional vector spaces over a field F. 
Recall that a tensor product of V and V0 is a pait (T t) consisting of a vector space T over F and a bilinear pairing t V V0 → T with the following universal property for any bilinear pairing B V V0 → W to any vector space W over F there exists a unique linear map L T → W\n\n•", null, "### LECTURE 17 PROPERTIES OF TENSOR PRODUCTS Theorem.\n\n2015-7-14 · LECTURE 17 PROPERTIES OF TENSOR PRODUCTS 3 This gives us a new operation on matrices tensor product. De nition. If A2M mk and B2M n then A Bis the block matrix with m k blocks of size n and where the ijblock is a ijB. That this is a nice operation will follow from our properties of tensor products.\n\n•", null, "### Tensor product surfacesUniversity of Illinois Urbana\n\n2015-10-26 · Tensor product surfaces • Usually domain is rectangular • until further notice all domains are rectangular. • Classical tensor product interpolate • Gouraud shading on a rectangle • this gives a bilinear interpolate of the rectangles vertex values. • Continuity constraints for surfaces are more interesting than for curves • Our curves have form\n\n•", null, "### TENSOR PRODUCTS Introduction R e f ij c e f\n\n2021-6-9 · Tensor products rst arose for vector spaces and this is the only setting where they occur in physics and engineering so we ll describe tensor products of vector spaces rst. Let V and W be vector spaces over a eld K and choose bases fe igfor V and ff jgfor W. The tensor product V\n\n•", null, "### Notes on Tensor Products and the Exterior Algebra\n\n2012-12-19 · to work with tensor products in a practical way. Later we ll show that such a space actually exists by constructing it. De nition 1.1. Let V 1V 2 be vector spaces over a eld F. A pair (Y ) where Y is a vector space over F and V 1 V 2 Y is a bilinear map is called the tensor product of V 1 and V 2 if the following condition holds\n\n•", null, "### Tensor products» Department of Mathematics\n\n2011-4-5 · Tensor products Joel Kamnitzer April 5 2011 1 The definition Let V W X be three vector spaces. A bilinear map from V W to X is a function H V W → X such that\n\n•", null, "### Lecture 24 Tensor Product StatesMichigan State\n\n2009-11-13 · Tensor-product spaces •The most general form of an operator in H 12 is –Here m n〉 may or may not be a tensor product state. 
The important thing is that it takes two quantum numbers to specify a basis state in H12 • A basis that is not formed from tensor-product states is an entangled-state basis • In the beginning you should\n\n•", null, "### Part III. Tensor Products. Kernels (ScienceDirect)\n\nSuch tensor products carry the locally convex spaces which arise by completion of the tensor products and called "topologized." In any representation of a vector space as a tensor product the first feature that strikes is that of a certain splitting. Splitting of the tensor product type is common in algebra.\n\n•", null, "### Vector Space Tensor Product -- from Wolfram MathWorld\n\n2021-7-19 · Using tensor products one can define symmetric tensors antisymmetric tensors as well as the exterior algebra. Moreover the tensor product is generalized to the vector bundle tensor product.
In particular tensor products of the tangent bundle and its dual bundle are studied in\n\n•", null, "### Tensor products (math)\n\n2018-3-17 · Tensor products Let R be a commutative ring. Given R-modules M1, M2 and N we say that a map b : M1 × M2 → N is R-bilinear if for all r, r′ ∈ R and module elements m_i, m′_i ∈ M_i we have b(r m1 + r′ m′1, m2) = r b(m1, m2) + r′ b(m′1, m2) and b(m1, r m2 + r′ m′2) = r b(m1, m2) + r′ b(m1, m′2). The set of all such R-bilinear maps is denoted by Bilin\n\n•", null, "### Lecture 24 Tensor Product States (Michigan State)\n\n2009-11-13 · • A tensor-product state is of the form – Tensor-product states are called factorizable • The most general state is – This may or may not be factorizable\n\n•", null, "### Composite Systems and Tensor Products\n\n2005-1-8 · unit cell.
In such cases one can construct the tensor product spaces in a straightforward manner using the principles described below. 6.2 Definition of tensor products Given two Hilbert spaces A and B their tensor product A ⊗ B can be defined in the following way, where we assume for simplicity that the spaces are finite-dimensional.\n\n•", null, "### 27. Tensor products (University of Minnesota)\n\n2009-2-5 · tensor products by mapping properties. This will allow us an easy proof that tensor products (if they exist) are unique up to unique isomorphism. Thus whatever construction we contrive must inevitably yield the same (or, better, equivalent) object. Then we give a modern construction. A tensor product of R-modules M, N is an R-module denoted M
[ null, "https://www.karo-bhz.pl/images/img/70.jpg", null, "https://www.karo-bhz.pl/images/img/18.jpg", null, "https://www.karo-bhz.pl/images/img/56.jpg", null, "https://www.karo-bhz.pl/images/img/36.jpg", null, "https://www.karo-bhz.pl/images/img/37.jpg", null, "https://www.karo-bhz.pl/images/img/68.jpg", null, "https://www.karo-bhz.pl/images/img/46.jpg", null, "https://www.karo-bhz.pl/images/img/50.jpg", null, "https://www.karo-bhz.pl/images/img/36.jpg", null, "https://www.karo-bhz.pl/images/img/53.jpg", null, "https://www.karo-bhz.pl/images/img/16.jpg", null, "https://www.karo-bhz.pl/images/img/73.jpg", null, "https://www.karo-bhz.pl/images/img/41.jpg", null, "https://www.karo-bhz.pl/images/img/18.jpg", null, "https://www.karo-bhz.pl/images/img/48.jpg", null, "https://www.karo-bhz.pl/images/img/75.jpg", null, "https://www.karo-bhz.pl/images/img/73.jpg", null, "https://www.karo-bhz.pl/images/img/56.jpg", null, "https://www.karo-bhz.pl/images/img/13.jpg", null, "https://www.karo-bhz.pl/images/img/7.jpg", null, "https://www.karo-bhz.pl/images/img/9.jpg", null, "https://www.karo-bhz.pl/images/img/66.jpg", null, "https://www.karo-bhz.pl/images/img/13.jpg", null, "https://www.karo-bhz.pl/images/img/35.jpg", null, "https://www.karo-bhz.pl/images/img/52.jpg", null, "https://www.karo-bhz.pl/images/img/4.jpg", null, "https://www.karo-bhz.pl/images/img/58.jpg", null, "https://www.karo-bhz.pl/images/img/66.jpg", null, "https://www.karo-bhz.pl/images/img/72.jpg", null, "https://www.karo-bhz.pl/images/img/46.jpg", null, "https://www.karo-bhz.pl/images/img/63.jpg", null, "https://www.karo-bhz.pl/images/img/62.jpg", null, "https://www.karo-bhz.pl/images/img/22.jpg", null, "https://www.karo-bhz.pl/images/img/10.jpg", null, "https://www.karo-bhz.pl/images/img/4.jpg", null, "https://www.karo-bhz.pl/images/img/62.jpg", null, "https://www.karo-bhz.pl/images/img/15.jpg", null, "https://www.karo-bhz.pl/images/img/72.jpg", null, "https://www.karo-bhz.pl/images/img/21.jpg", null, 
"https://www.karo-bhz.pl/images/img/14.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87172467,"math_prob":0.8401873,"size":12352,"snap":"2022-05-2022-21","text_gpt3_token_len":2979,"char_repetition_ratio":0.18253969,"word_repetition_ratio":0.5521024,"special_character_ratio":0.23955634,"punctuation_ratio":0.05278842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96323735,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T22:19:23Z\",\"WARC-Record-ID\":\"<urn:uuid:7a483a42-8da8-4fa8-b360-d90d1ef1d3ee>\",\"Content-Length\":\"23703\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25abe85f-d28e-4c46-90ad-b15a7448e132>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a5e4955-3118-40ac-bd0b-e99dde4b6d37>\",\"WARC-IP-Address\":\"104.21.28.238\",\"WARC-Target-URI\":\"https://www.karo-bhz.pl/May-03/4001.html\",\"WARC-Payload-Digest\":\"sha1:3EKFGRLU4EVP2JZL7BSEM5MZMXE3L7VY\",\"WARC-Block-Digest\":\"sha1:BQ7BIVPM4OM5GCJUJ6VMJQLOKWP5GJCA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662577259.70_warc_CC-MAIN-20220524203438-20220524233438-00574.warc.gz\"}"}
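The block-matrix ("Kronecker") tensor product quoted in the "Properties of Tensor Products" lecture snippet above is easy to check numerically. A minimal dependency-free sketch (the helper name `kron` is mine, purely illustrative):

```python
# The lecture defines A ⊗ B as the block matrix whose (i, j) block is a_ij * B.
def kron(A, B):
    """Kronecker (tensor) product of two matrices given as lists of lists."""
    m, k = len(A), len(A[0])
    n, l = len(B), len(B[0])
    # Entry (i, j) of A ⊗ B sits in block (i // n, j // l), position (i % n, j % l).
    return [[A[i // n][j // l] * B[i % n][j % l] for j in range(k * l)]
            for i in range(m * n)]

A = [[1, 2],
     [3, 4]]
B = [[0, 5],
     [6, 7]]
K = kron(A, B)  # 4 x 4 result

# Top-left block is 1 * B, top-right block is 2 * B, as the definition promises.
assert K[0][:2] == [0, 5] and K[0][2:] == [0, 10]
```

The same operation is available as `numpy.kron` for array inputs; the hand-rolled version above just makes the block structure explicit.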
https://mathematica.stackexchange.com/questions/43923/applying-rules-to-rewrite-expressions-that-contain-patterns/43928
[ "# Applying rules to rewrite expressions that contain patterns\n\nI'm trying to write a rule that can rewrite a pattern that exists in another expression. For some reason, this is proving to be more challenging than I expected ... most likely because I'm not totally understanding how Mathematica implements its pattern matching mechanism.\n\nHere's my simple test case:\n\nfoo = Foo[a: {A,B,C}, b: {a,b,c}] (* FullForm yields: Foo[Pattern[a,A],Pattern[b,B]] *)\n\n\nI want to rewrite the portion b : {a,b,c} with some other expression, like Reverse[<b>] which should yield: b : {c,b,a}.\n\nHere's what I've tried so far (none of which works):\n\nfoo /. (b:X_) -> Reverse[X] (* #1 produces Foo[b: {a,b,c}, a: {A,B,C}] *)\n\nfoo /. Hold[(b : B_)] -> Reverse[B] (* #2 no effect *)\n\nfoo /. HoldForm[(b : B_)] -> Reverse[B] (* #3 no effect *)\n\nfoo /. HoldPattern[(b : B_)] -> Reverse[B] (* #4 same as case #1 *)\n\nfoo /. (b : B_List) -> Reverse[B] (* #5 reverses both a:{} and b:{} *)\n\n• You may also be interested in expression parsers. Two examples which come to mind are here and here (function depends). – Leonid Shifrin Mar 13 '14 at 11:01\n\nfoo /. HoldPattern[Pattern][Verbatim[b], x_] :> Pattern[b, Reverse@x]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8374695,"math_prob":0.6892766,"size":951,"snap":"2020-45-2020-50","text_gpt3_token_len":280,"char_repetition_ratio":0.1404435,"word_repetition_ratio":0.030303031,"special_character_ratio":0.36908516,"punctuation_ratio":0.22685185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96551025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T12:25:42Z\",\"WARC-Record-ID\":\"<urn:uuid:cd5e7994-285b-498e-9b3e-0f424d46b609>\",\"Content-Length\":\"149977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c41fe93-4109-4953-8401-ba11bfa0bbdb>\",\"WARC-Concurrent-To\":\"<urn:uuid:5596be94-c79e-41d8-b989-c34a1dc04dae>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/43923/applying-rules-to-rewrite-expressions-that-contain-patterns/43928\",\"WARC-Payload-Digest\":\"sha1:3EEAIMCDVGUKBF4Q7XTZ3HUNOHACV4WY\",\"WARC-Block-Digest\":\"sha1:WSY2Z6CSZKJTM4PBFXWNGDZJYQX74PKD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107891228.40_warc_CC-MAIN-20201026115814-20201026145814-00172.warc.gz\"}"}
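The accepted one-liner in the Q&A above works because `HoldPattern[Pattern][Verbatim[b], x_]` matches the literal `Pattern[b, ...]` subexpression instead of letting `b : ...` act as a pattern itself. A rough Python analogue of that idea (the tuple representation and helper are mine, purely illustrative, not Wolfram Language API) treats the expression as a tree and rewrites only the subterm tagged `b`:

```python
# Foo[a : {A,B,C}, b : {a,b,c}] rendered as nested tuples, where
# ("Pattern", tag, payload) mirrors Mathematica's Pattern[tag, payload].
expr = ("Foo",
        ("Pattern", "a", ["A", "B", "C"]),
        ("Pattern", "b", ["a", "b", "c"]))

def rewrite(node, tag, f):
    """Apply f to the payload of each Pattern node carrying `tag`.
    The tag is compared literally, which is the role Verbatim plays above."""
    if isinstance(node, tuple):
        if node[0] == "Pattern" and node[1] == tag:
            return ("Pattern", tag, f(node[2]))
        return tuple(rewrite(child, tag, f) for child in node)
    return node

result = rewrite(expr, "b", lambda xs: list(reversed(xs)))
# result now represents Foo[a : {A,B,C}, b : {c,b,a}]
assert result[2] == ("Pattern", "b", ["c", "b", "a"])
```

The design point is the same in both languages: match the *structure* holding the pattern rather than the pattern itself, then rebuild it with the transformed payload.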
https://socoder.net/?Topic=8518
[ "(c) WidthPadding Industries 1987 Socoder -> Health Matters -> Bad Food Fri, 17 Dec 2021, 05:18 steve_ancell", null, "Don't usually like spewing my guts up but can say I really fucking needed that, never eating fuck all from Lidl again! Fri, 17 Dec 2021, 05:18 Jayenkai", null, "I think there might be a bug going round. Both me and Mum have had extreme vomittyness, last week.. -=-=- ''Load, Next List!''", null, "Fri, 17 Dec 2021, 05:25 steve_ancell", null, "I'm pretty sure I spewed last time I had a ready meal from that place, this was the first time I've been there within a couple of years too. Fri, 17 Dec 2021, 07:13 Kuron", null, "Hoping you feel better soon Steve, and sorry to hear about Jay and Mum. -=-=- Live Strong. Act Bold. Be Brave. Nothing's Too Hard To Do. Never Give Up. ALWAYS BELIEVE. -- Warrior Fri, 17 Dec 2021, 07:24 cyangames", null, "Eurghh, ready meals... I had the same issue with takeaway from a place near me. Tried it 2wice, both times it tasted really nice, but nope, blerugghhh... -=-=- Web / Game Dev, occasionally finishes off coding games also! Fri, 17 Dec 2021, 11:43 steve_ancell", null, "Had a few hours sleep until Beaky and Squeak flew upstairs to sit on me, feel a whole lot better now apart from a bit of a sore throat after spew burned it.
[ null, "https://socoder.com/uploads/51/favicon_original.png", null, "https://socoder.net/uploads/1/avatar_201402.png", null, "https://socoder.net/s2css/blank.png", null, "https://socoder.com/uploads/51/favicon_original.png", null, "https://socoder.net/avabar.php", null, "https://socoder.net/uploads/1005/2023/19/6464.png", null, "https://socoder.com/uploads/51/favicon_original.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93912584,"math_prob":0.9630144,"size":1221,"snap":"2023-40-2023-50","text_gpt3_token_len":384,"char_repetition_ratio":0.12078883,"word_repetition_ratio":0.017391304,"special_character_ratio":0.32514334,"punctuation_ratio":0.1875,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.95725316,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T14:35:29Z\",\"WARC-Record-ID\":\"<urn:uuid:850381fb-f8f3-4f63-bea6-05e0b5f6215f>\",\"Content-Length\":\"25508\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e61d2c2-1a1b-45e5-af38-45c94fc6853c>\",\"WARC-Concurrent-To\":\"<urn:uuid:db18d34a-b95e-4e97-96e8-60895067a082>\",\"WARC-IP-Address\":\"88.80.187.139\",\"WARC-Target-URI\":\"https://socoder.net/?Topic=8518\",\"WARC-Payload-Digest\":\"sha1:JXOCML5S4IVYMPCTAGLEWD35JL6QDQHQ\",\"WARC-Block-Digest\":\"sha1:T6TFNNMFP5YCQQ2WKIIYJTOHI4IH3QDL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100599.20_warc_CC-MAIN-20231206130723-20231206160723-00222.warc.gz\"}"}
https://bettersolutions.com/excel/functions/info-function.htm
[ "### INFO(type_text)\n\nReturns a text string of useful information about the current operating environment.\n\n type_text The text specifying what type of information you want returned:\"directory\" = the path of the current directory or folder\"numfile\" = the number of active worksheets in the open workbooks\"origin\" = the first visible cell in upper left corner\"osversion\" = the current operating system version\"recalc\" = the current recalculation mode, either automatic or manual\"release\" = the current release version of Microsoft Excel\"system\" = the current name of the operating system\"memavail\" (Removed in 2007) = the amount of memory available in bytes\"memused\" (Removed in 2007) = the amount of memory currently being used in bytes\"totmem\" (Removed in 2007) = the total amount of memory in bytes\n\n#### Remarks\n\n * This function is Volatile and will change every time a cell on the worksheet is calculated.* If \"type_text\" = \"numfile\", then will return the total number of worksheets from all the workbooks that are currently open, including add-ins and hidden workbooks like Personal.xlsb.* If \"type_text\" = \"origin\", then the cell reference of the top leftmost cell visible in the current window, based on the current scrolling position.* If \"type_text\" = \"system\", then a Windows operating system will return \"pcdos\" and a Macintosh system will return \"mac\".* For the Microsoft documentation refer to support.microsoft.com\n\n A 1 =INFO(\"directory\") = C:\\Temp\\ 2 =INFO(\"numfile\") = 31 3 =INFO(\"origin\") = \\$A:\\$A\\$465 4 =INFO(\"osversion\") = Windows (64-bit) NT 10.00 5 =INFO(\"recalc\") = Automatic 6 =INFO(\"release\") = 16.0 7 =INFO(\"system\") = pcdos" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80255914,"math_prob":0.8290178,"size":1509,"snap":"2021-31-2021-39","text_gpt3_token_len":340,"char_repetition_ratio":0.13953489,"word_repetition_ratio":0.03846154,"special_character_ratio":0.22995362,"punctuation_ratio":0.07083333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95535934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T13:30:17Z\",\"WARC-Record-ID\":\"<urn:uuid:c7843338-b034-40a4-b021-fb3ba3a39ccb>\",\"Content-Length\":\"19590\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:effccf0e-49d9-45b4-a2a3-f10b899fc5ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:b9ef8f84-608b-4c66-b3b6-96ab1817f352>\",\"WARC-IP-Address\":\"160.153.155.173\",\"WARC-Target-URI\":\"https://bettersolutions.com/excel/functions/info-function.htm\",\"WARC-Payload-Digest\":\"sha1:WZV25PDOO3DGGJHKXUD4SWIBOIXF4ZG5\",\"WARC-Block-Digest\":\"sha1:PXK3HPZU5VDBKWAFBOSBABRD7OM3GJUM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060803.2_warc_CC-MAIN-20210928122846-20210928152846-00492.warc.gz\"}"}
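Several of the `type_text` values documented above have informal counterparts in Python's standard library. A small sketch for comparison (the mapping is my own illustration, not part of Excel's documentation):

```python
import os
import platform

# Rough Python counterparts for a few of Excel's INFO(type_text) values.
info = {
    "directory": os.getcwd(),         # INFO("directory"): current folder path
    "osversion": platform.version(),  # INFO("osversion"): OS version string
    "system": platform.system(),      # INFO("system"): OS family name
}

for key, value in info.items():
    print(f"{key}: {value}")
```

Unlike Excel's volatile INFO, these calls are only evaluated when the script runs.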
http://ibetov.info/water-flow-rates/water-flow-rates-related-post-calculation-of-water-flow-rates-through-different-size-pipes-water-flow-rates-for-pigs/
[ "# Water Flow Rates Related Post Calculation Of Water Flow Rates Through Different Size Pipes Water Flow Rates For Pigs", null, "water flow rates related post calculation of water flow rates through different size pipes water flow rates for pigs." ]
[ null, "http://ibetov.info/wp-content/uploads/2019/03/water-flow-rates-related-post-calculation-of-water-flow-rates-through-different-size-pipes-water-flow-rates-for-pigs.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8262664,"math_prob":0.99886835,"size":1174,"snap":"2019-13-2019-22","text_gpt3_token_len":217,"char_repetition_ratio":0.22820513,"word_repetition_ratio":0.011235955,"special_character_ratio":0.16950597,"punctuation_ratio":0.063725494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97190994,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-24T04:46:11Z\",\"WARC-Record-ID\":\"<urn:uuid:0350fa83-0fe7-45fe-9b2a-feff3a73074c>\",\"Content-Length\":\"59502\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:645ca7f4-a01f-4b0d-8e68-57af32bc463f>\",\"WARC-Concurrent-To\":\"<urn:uuid:07b2d126-8121-4031-8396-b3dcecc8747b>\",\"WARC-IP-Address\":\"104.27.150.95\",\"WARC-Target-URI\":\"http://ibetov.info/water-flow-rates/water-flow-rates-related-post-calculation-of-water-flow-rates-through-different-size-pipes-water-flow-rates-for-pigs/\",\"WARC-Payload-Digest\":\"sha1:X5RLILA7MZWNTRDSR2YKQGLNYJBTFYRO\",\"WARC-Block-Digest\":\"sha1:BPBDVE256BPAIR35UY7O6ER2VBQ6QO2O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257514.68_warc_CC-MAIN-20190524044320-20190524070320-00416.warc.gz\"}"}
https://www.colorhexa.com/0d199c
[ "# #0d199c Color Information\n\nIn a RGB color space, hex #0d199c is composed of 5.1% red, 9.8% green and 61.2% blue. Whereas in a CMYK color space, it is composed of 91.7% cyan, 84% magenta, 0% yellow and 38.8% black. It has a hue angle of 235 degrees, a saturation of 84.6% and a lightness of 33.1%. #0d199c color hex could be obtained by blending #1a32ff with #000039. Closest websafe color is: #000099.\n\n• R 5\n• G 10\n• B 61\nRGB color chart\n• C 92\n• M 84\n• Y 0\n• K 39\nCMYK color chart\n\n#0d199c color description : Dark blue.\n\n# #0d199c Color Conversion\n\nThe hexadecimal color #0d199c has RGB values of R:13, G:25, B:156 and CMYK values of C:0.92, M:0.84, Y:0, K:0.39. Its decimal value is 858524.\n\nHex triplet: 0d199c `#0d199c`\nRGB: 13, 25, 156 `rgb(13,25,156)`\nRGB percent: 5.1, 9.8, 61.2 `rgb(5.1%,9.8%,61.2%)`\nCMYK: 92, 84, 0, 39\nHSL: 235°, 84.6, 33.1 `hsl(235,84.6%,33.1%)`\nHSV: 235°, 91.7, 61.2\nWeb safe: 000099 `#000099`\nCIE-LAB: 20.753, 46.188, -69.217\nXYZ: 6.513, 3.181, 31.721\nxyY: 0.157, 0.077, 3.181\nCIE-LCH: 20.753, 83.213, 303.715\nCIE-LUV: 20.753, -6.325, -74.655\nHunter-Lab: 17.834, 33.979, -92.973\nBinary: 00001101, 00011001, 10011100\n\n# Color Schemes with #0d199c\n\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #9c900d\n``#9c900d` `rgb(156,144,13)``\nComplementary Color\n• #0d609c\n``#0d609c` `rgb(13,96,156)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #490d9c\n``#490d9c` `rgb(73,13,156)``\nAnalogous Color\n• #619c0d\n``#619c0d` `rgb(97,156,13)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #9c490d\n``#9c490d` `rgb(156,73,13)``\nSplit Complementary Color\n• #199c0d\n``#199c0d` `rgb(25,156,13)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #9c0d19\n``#9c0d19` `rgb(156,13,25)``\n• #0d9c90\n``#0d9c90` `rgb(13,156,144)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #9c0d19\n``#9c0d19` `rgb(156,13,25)``\n• #9c900d\n``#9c900d` `rgb(156,144,13)``\n• #070e55\n``#070e55` `rgb(7,14,85)``\n• #09116d\n``#09116d` `rgb(9,17,109)``\n• #0b1584\n``#0b1584` `rgb(11,21,132)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #0f1db4\n``#0f1db4`
`rgb(15,29,180)``\n• #1121cb\n``#1121cb` `rgb(17,33,203)``\n• #1324e3\n``#1324e3` `rgb(19,36,227)``\nMonochromatic Color\n\n# Alternatives to #0d199c\n\nBelow, you can see some colors close to #0d199c. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0d3d9c\n``#0d3d9c` `rgb(13,61,156)``\n• #0d319c\n``#0d319c` `rgb(13,49,156)``\n• #0d259c\n``#0d259c` `rgb(13,37,156)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #0d0d9c\n``#0d0d9c` `rgb(13,13,156)``\n• #190d9c\n``#190d9c` `rgb(25,13,156)``\n• #250d9c\n``#250d9c` `rgb(37,13,156)``\nSimilar Colors\n\n# #0d199c Preview\n\nThis text has a font color of #0d199c.\n\n``<span style=\"color:#0d199c;\">Text here</span>``\n#0d199c background color\n\nThis paragraph has a background color of #0d199c.\n\n``<p style=\"background-color:#0d199c;\">Content here</p>``\n#0d199c border color\n\nThis element has a border color of #0d199c.\n\n``<div style=\"border:1px solid #0d199c;\">Content here</div>``\nCSS codes\n``.text {color:#0d199c;}``\n``.background {background-color:#0d199c;}``\n``.border {border:1px solid #0d199c;}``\n\n# Shades and Tints of #0d199c\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #01020b is the darkest color, while #f8f9fe is the lightest one.\n\n• #01020b\n``#01020b` `rgb(1,2,11)``\n• #02051d\n``#02051d` `rgb(2,5,29)``\n• #04082f\n``#04082f` `rgb(4,8,47)``\n• #050a41\n``#050a41` `rgb(5,10,65)``\n• #070d54\n``#070d54` `rgb(7,13,84)``\n• #081066\n``#081066` `rgb(8,16,102)``\n• #0a1378\n``#0a1378` `rgb(10,19,120)``\n• #0b168a\n``#0b168a` `rgb(11,22,138)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #0f1cae\n``#0f1cae` `rgb(15,28,174)``\n• #101fc0\n``#101fc0` `rgb(16,31,192)``\n• #1222d2\n``#1222d2` `rgb(18,34,210)``\n• #1325e4\n``#1325e4` `rgb(19,37,228)``\n• #1f30ec\n``#1f30ec` `rgb(31,48,236)``\n• #3141ee\n``#3141ee` `rgb(49,65,238)``\n• #4351ef\n``#4351ef` `rgb(67,81,239)``\n• #5562f1\n``#5562f1` `rgb(85,98,241)``\n• #6773f2\n``#6773f2` `rgb(103,115,242)``\n• #7984f4\n``#7984f4` `rgb(121,132,244)``\n• #8b94f5\n``#8b94f5` `rgb(139,148,245)``\n• #9ea5f7\n``#9ea5f7` `rgb(158,165,247)``\n• #b0b6f8\n``#b0b6f8` `rgb(176,182,248)``\n• #c2c6fa\n``#c2c6fa` `rgb(194,198,250)``\n• #d4d7fb\n``#d4d7fb` `rgb(212,215,251)``\n• #e6e8fd\n``#e6e8fd` `rgb(230,232,253)``\n• #f8f9fe\n``#f8f9fe` `rgb(248,249,254)``\nTint Color Variation\n\n# Tones of #0d199c\n\nA tone is produced by adding gray to any pure hue. 
In this case, #4e4f5b is the least saturated color, while #000ea9 is the most saturated one.\n\n• #4e4f5b\n``#4e4f5b` `rgb(78,79,91)``\n• #484a62\n``#484a62` `rgb(72,74,98)``\n• #414468\n``#414468` `rgb(65,68,104)``\n• #3b3f6f\n``#3b3f6f` `rgb(59,63,111)``\n• #343975\n``#343975` `rgb(52,57,117)``\n• #2e347c\n``#2e347c` `rgb(46,52,124)``\n• #272f82\n``#272f82` `rgb(39,47,130)``\n• #212989\n``#212989` `rgb(33,41,137)``\n• #1a248f\n``#1a248f` `rgb(26,36,143)``\n• #141e96\n``#141e96` `rgb(20,30,150)``\n• #0d199c\n``#0d199c` `rgb(13,25,156)``\n• #0714a3\n``#0714a3` `rgb(7,20,163)``\n• #000ea9\n``#000ea9` `rgb(0,14,169)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0d199c is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5287075,"math_prob":0.70951545,"size":3658,"snap":"2020-10-2020-16","text_gpt3_token_len":1671,"char_repetition_ratio":0.12397373,"word_repetition_ratio":0.011111111,"special_character_ratio":0.56287587,"punctuation_ratio":0.23516238,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9875373,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T07:19:22Z\",\"WARC-Record-ID\":\"<urn:uuid:98047c90-2560-4870-9233-e67badfb2fba>\",\"Content-Length\":\"36234\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ef31abe-742c-4803-ac96-eb035616c1ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e45a3db-8452-405e-bb69-6b678f3936a3>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0d199c\",\"WARC-Payload-Digest\":\"sha1:SQ4MMEHYTEUHGGMZFYBBSTU6HIULIM4U\",\"WARC-Block-Digest\":\"sha1:4TEKX7YYRMJCFE6TLEBS2Z2U7BECDUJL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143635.54_warc_CC-MAIN-20200218055414-20200218085414-00304.warc.gz\"}"}
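The RGB to CMYK figures quoted on the color page above follow from the standard conversion formulas. A quick check in Python (formulas only; this is not ColorHexa's own code):

```python
def rgb_to_cmyk(r, g, b):
    """Convert 0-255 RGB values to CMYK fractions via the usual formulas."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)          # black = 1 - brightest channel
    if k == 1:                    # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

# #0d199c = rgb(13, 25, 156); the page lists CMYK of roughly 92, 84, 0, 39 percent.
c, m, y, k = rgb_to_cmyk(13, 25, 156)
print(round(c * 100), round(m * 100), round(y * 100), round(k * 100))  # 92 84 0 39
```

The same max/min arithmetic also reproduces the quoted lightness: (156/255 + 13/255) / 2 is about 0.331, matching the stated 33.1%.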
https://my.solabuto.ru/algotreyding/raduzhnyiy-grafik/
[ "# RAINBOW GRAPH", null, "Type: Function, Name: rainbow

Inputs: Price(Numeric), Length(Numeric), Level(Numeric);
Vars: LVLAvg(0);
Array: Avg[10](0);

Avg[1] = Average(Price, Length);
Avg[2] = Average(Avg[1], Length);
Avg[3] = Average(Avg[2], Length);
Avg[4] = Average(Avg[3], Length);
Avg[5] = Average(Avg[4], Length);
Avg[6] = Average(Avg[5], Length);
Avg[7] = Average(Avg[6], Length);
Avg[8] = Average(Avg[7], Length);
Avg[9] = Average(Avg[8], Length);
Avg[10] = Average(Avg[9], Length);

For value1 = 1 To 10 Begin
IF value1 = Level Then
LVLAvg = Avg[value1];
End;

Rainbow = LVLAvg;

Type: Function, Name: RainbowBW

Inputs: Price(Numeric), Length(Numeric), Level(Numeric);
Vars: LVLAvg(0), HiPrice(0), LoPrice(0), HiAvg(0), LoAvg(0);
Array: Avg[10](0);

Avg[1] = Average(Price, Length);
Avg[2] = Average(Avg[1], Length);
Avg[3] = Average(Avg[2], Length);
Avg[4] = Average(Avg[3], Length);
Avg[5] = Average(Avg[4], Length);
Avg[6] = Average(Avg[5], Length);
Avg[7] = Average(Avg[6], Length);
Avg[8] = Average(Avg[7], Length);
Avg[9] = Average(Avg[8], Length);
Avg[10] = Average(Avg[9], Length);

HiPrice = Highest(Price, Level);
LoPrice = Lowest(Price, Level);
HiAvg = Avg[1];
LoAvg = Avg[1];
For value1 = 2 To Level Begin
IF Avg[value1] > HiAvg Then
HiAvg = Avg[value1];
IF Avg[value1] < LoAvg Then
LoAvg = Avg[value1];
End;

IF HiPrice - LoPrice <> 0 Then Begin
IF Price > HiAvg Then
HiAvg = Price;
IF Price < LoAvg Then
LoAvg = Price;
RainbowBW = 100 * ((HiAvg - LoAvg) / (HiPrice - LoPrice));
End;

Type: Function, Name: RainbowOsc

Inputs: Price(Numeric), Length(Numeric), Level(Numeric);
Vars: AvgAvgs(0), HiPrice(0), LoPrice(0), AvgVal(0);
Array: Avg[10](0);

AvgAvgs = 0;
Avg[1] = Average(Price, Length);
Avg[2] = Average(Avg[1], Length);
Avg[3] = Average(Avg[2], Length);
Avg[4] = Average(Avg[3], Length);
Avg[5] = Average(Avg[4], Length);
Avg[6] = Average(Avg[5], Length);
Avg[7] = Average(Avg[6], Length);
Avg[8] = Average(Avg[7], Length);
Avg[9] = Average(Avg[8], Length);
Avg[10] = Average(Avg[9], Length);

HiPrice = Highest(Price, Level);
LoPrice = Lowest(Price, Level);
For value1 = 1 To
Level Begin\nAvgAvgs = AvgAvgs + Avg[value1];\nEnd;\nAvgVal = AvgAvgs / Level;\n\nIF HiPrice — LoPrice <> 0 Then\nRainbowOsc = 100 * ((Close — AvgVal) / (HiPrice — LoPrice));\n\nType: Indicator, Name: Rainbow_a\n\nInputs: P(Close), Len(2);\n\nIF CurrentBar > Len * 10 Then Begin\nPlot1(Rainbow(P, Len, 10), «Avg10»);\nPlot2(Rainbow(P, Len, 9), «Avg9»);\nEnd;\n\n«Style:\n\nPlot Name Type Color Weight\nPlot1 Avg10 Line Dk Blue medium\nPlot2 Avg9 Line Dk Magenta medium\nScaling: Same as Price Data»\n\nType: Indicator, Name: Rainbow_b\n\nInputs: P(Close), Len(2);\n\nIF CurrentBar > Len *10 Then Begin\nPlot1(Rainbow(P, Len, 8), «Avg8»);\nPlot2(Rainbow(P, Len, 7), «Avg7»);\nPlot3(Rainbow(P, Len, 6), «Avg6»);\nPlot4(Rainbow(P, Len, 5), «Avg5»);\nEnd;\n\n«Style:\n\nPlot Name Type Color Weight\nPlot1 Avg8 Line Dk Green medium\nPlot2 Avg7 Line Dk Cyan medium\nPlot3 Avg6 Line Blue\nPlot4 Avg5 Line Cyan\nScaling: Same as Price Data»\n\nType: Indicator, Name: Rainbow_c\n\nInputs: P(Close), Len(2);\n\nIF CurrentBar > Len *10 Then Begin\nPlot1(Rainbow(P, Len, 4), «Avg4»);\nPlot2(Rainbow(P, Len, 3), «Avg3»);\nPlot3(Rainbow(P, Len, 2), «Avg2»);\nPlot4(Rainbow(P, Len, 1), «Avg1»);\nEnd;\n\n«Style:\n\nPlot Name Type Color Weight\nPlot1 Avg4 Line Green medium\nPlot2 Avg3 Line Yellow medium\nPlot3 Avg2 Line Magenta medium\nPlot4 Avg1 Line Red medium»", null, "Type: Indicator, Name: Rainbow Oscillator\n\nInputs: P(Close), Len(2), Levels(10);\nVars: PosNeg(0);\n\nIF CurrentBar > Len * Levels Then Begin\nPlot1(RainbowBW(P, Len, Levels), «URB»);\nPlot2(-RainbowBW(P, Len, Levels), «LRB»);\nPosNeg = RainbowOsc(P, Len, Levels);\n\nIF PosNeg > 0 Then\nPlot3(PosNeg, «RainbowOsc»)\nElse\nPlot4(PosNeg, «RainbowOsc»);\nEnd;\n\n«Style:\nPlot Name Type Color Weight\nPlot1 URB Line Red medium\nPlot2 LRB Line Blue medium\nPlot3 RainbowOsc Histogram Red medium\nPlot4 RainbowOsc Histogram Blue medium»\n\nПодписаться\nУведомить о\n0 комментариев\nМежтекстовые Отзывы\nПосмотреть все комментарии" ]
[ null, "https://my.solabuto.ru/wp-content/uploads/2018/06/rainbow.png", null, "https://my.solabuto.ru/wp-content/uploads/2018/06/Rainbow-Oscillator.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.548075,"math_prob":0.99501014,"size":3852,"snap":"2022-27-2022-33","text_gpt3_token_len":1362,"char_repetition_ratio":0.24532224,"word_repetition_ratio":0.35205993,"special_character_ratio":0.36033228,"punctuation_ratio":0.25775656,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99713975,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T02:27:01Z\",\"WARC-Record-ID\":\"<urn:uuid:a470c3f3-7c72-4eab-ba60-c75687ece6e3>\",\"Content-Length\":\"82455\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a49b026d-7d78-4596-94f2-cf041da1a189>\",\"WARC-Concurrent-To\":\"<urn:uuid:b99b06e3-a3ff-46ad-947e-569bcfd9c5d2>\",\"WARC-IP-Address\":\"193.106.93.210\",\"WARC-Target-URI\":\"https://my.solabuto.ru/algotreyding/raduzhnyiy-grafik/\",\"WARC-Payload-Digest\":\"sha1:FHM5CRKLIR24TA7KTNNIRWL6OX47UNHH\",\"WARC-Block-Digest\":\"sha1:JNUK6M54SOTTZK7FVEYABXDS2UBMP25K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103917192.48_warc_CC-MAIN-20220701004112-20220701034112-00694.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-8-section-8-3-the-slope-of-a-line-exercises-page-310/34
[ "## Elementary Technical Mathematics\n\n$y=\\underset\\uparrow{\\frac{4}{5}}x-2$ $\\ \\ \\ \\ \\ \\$slope $y=-\\underset\\uparrow{\\frac{4}{5}}x+\\frac{2}{3}$ $\\ \\ \\ \\ \\ \\ \\ \\ \\ \\$slope The slopes are not equal and are not negative inverse, so the lines are neither parallel nor perpendicular." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6330995,"math_prob":0.9996593,"size":275,"snap":"2022-27-2022-33","text_gpt3_token_len":93,"char_repetition_ratio":0.19557196,"word_repetition_ratio":0.25,"special_character_ratio":0.37818182,"punctuation_ratio":0.04347826,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9666098,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T03:59:50Z\",\"WARC-Record-ID\":\"<urn:uuid:f756fc8a-6469-4088-a8a2-ba6591e7acf2>\",\"Content-Length\":\"56483\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5be84a7e-d433-469a-b191-77aa194e78ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ec9f542-86c3-4d63-a46f-71461225227d>\",\"WARC-IP-Address\":\"34.226.243.37\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-8-section-8-3-the-slope-of-a-line-exercises-page-310/34\",\"WARC-Payload-Digest\":\"sha1:WJTF2L4PCRF7UNLG6GB7NSKQHJKSFTP6\",\"WARC-Block-Digest\":\"sha1:D2ZQ2C7TOR23RVGDWQPA5THBUAXCHATY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104512702.80_warc_CC-MAIN-20220705022909-20220705052909-00134.warc.gz\"}"}
https://www.elitedigitalstudy.com/12059/a-manufacturer-reckons-that-the-value-of-a-machine-which-costs-him-rs-15625
[ "A manufacturer reckons that the value of a machine, which costs him Rs 15625, will depreciate each year by 20%. Find the estimated value at the end of 5 years.\n\nAsked by Pragya Singh | 1 year ago |  110\n\n##### Solution :-\n\nGiven, the cost of machine = Rs 15625\n\nAlso, given that the machine depreciates by 20% every year.\n\nHence, its value after every year is 80% of the original cost i.e., $$\\dfrac{4}{5}$$ of the original cost.\n\n$$15625\\times \\dfrac{4}{5}\\times \\dfrac{4}{5}\\times.......\\times \\dfrac{4}{5}$$\n\nTherefore, the value at the end of 5 years =\n\n= 5 × 1024 = 5120\n\nThus, the value of the machine at the end of 5 years will be Rs 5120.\n\nAnswered by Abhisek | 1 year ago\n\n### Related Questions\n\n#### Construct a quadratic in x such that A.M. of its roots is A and G.M. is G.\n\nConstruct a quadratic in x such that A.M. of its roots is A and G.M. is G.\n\n#### Find the two numbers whose A.M. is 25 and GM is 20.\n\nFind the two numbers whose A.M. is 25 and GM is 20.\n\n#### If a is the G.M. of 2 and 1/4 find a.\n\nIf a is the G.M. of 2 and $$\\dfrac{1}{4}$$ find a.\n\n#### Find the geometric means of the following pairs of numbers\n\nFind the geometric means of the following pairs of numbers:\n\n(i) 2 and 8\n\n(ii) a3b and ab3\n\n(iii) –8 and –2\n\nInsert 5 geometric means between $$\\dfrac{32}{9}$$ and $$\\dfrac{81}{2}$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8977075,"math_prob":0.999333,"size":811,"snap":"2023-14-2023-23","text_gpt3_token_len":282,"char_repetition_ratio":0.14622056,"word_repetition_ratio":0.026845638,"special_character_ratio":0.38964242,"punctuation_ratio":0.1573604,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999724,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T22:19:18Z\",\"WARC-Record-ID\":\"<urn:uuid:e38092ad-83c5-46c3-a52e-ea65235dbe6e>\",\"Content-Length\":\"51377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:160dcadf-1aca-4c69-a228-9f884477261d>\",\"WARC-Concurrent-To\":\"<urn:uuid:125e095a-79c6-4087-8074-1b1929bfae1c>\",\"WARC-IP-Address\":\"184.168.103.26\",\"WARC-Target-URI\":\"https://www.elitedigitalstudy.com/12059/a-manufacturer-reckons-that-the-value-of-a-machine-which-costs-him-rs-15625\",\"WARC-Payload-Digest\":\"sha1:4NGOIHLARZ7QVWPFOMCRP5SLR7N54RII\",\"WARC-Block-Digest\":\"sha1:Q6TNGN2HWF6PYF7FVB6RJWKYLOW2NONJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945289.9_warc_CC-MAIN-20230324211121-20230325001121-00756.warc.gz\"}"}
https://physics.stackexchange.com/questions/303647/why-dont-electrons-in-a-fission-reaction-run-out-due-to-chain-reactions
[ "Why don't electrons in a fission reaction run out due to chain reactions?\n\nI was watching an explanation (found here) on nuclear fission. In the video, she described the process of fission to happen one a random neutron smashes into a uranium nucleus. This causes the nucleus to split into krpyton and barium, taking part of the nucleon and electrons with it, along with a few extra neutrons. The extra neutrons then smash into other uranium nucleuses, causing the chain reaction.\n\nMy question is, if you start with only $x$ electrons in the original uranium atom, you must run out of electrons soon in the chain reaction. Where do the extra electrons, needed to continue the chain reaction, come from?\n\n• The electrons don't really play a part in the (neutron+Uranium) fission reaction and the total number of electrons remains unchanged. Are you sure you didn't mean to ask something else? – Squid Jan 8 '17 at 4:48\n\nA possible fission reaction equation is as follows:\n\n$$^{235}_ {\\;92}\\rm U +_0^1n\\rightarrow ^{236}_ {\\;92}U\\rightarrow ^{140}_ {\\;54}Xe+ ^{94}_ {38}Sr+2_0^1n$$\n\nThis equation is balanced if neutral atoms are produced.\nEven if ions were produced there would still balance of positive and negative charges.\n\n$\\rm ^{140}_ {\\;54}Xe$ and $\\rm ^{94}_ {38}Sr$ are radioactive because they are neutron rich and undergo beta decays which converts neutrons into protons in the nucleus with the emission of fast moving electrons (beta particles).\n\n$\\rm ^{140}_ {\\;54}Xe$ undergoes four beta decays and ends up as $\\rm ^{140}_ {\\;58}Ce^{4+}$ and four electrons have been emitted.\nEventually the $\\rm ^{140}_ {\\;54}Xe^{4+}$ finds four electrons (or the intermediate products collect electrons) and a neutral atom is produced.\nSimilarly the $\\rm ^{94}_ {38}Sr$ undergoes two beta decays, collects two electrons and forms an atom of $\\rm ^{94}_ {40}Zr$\n\nThe electrons balance the number of protons in matter, and the total charge is zero, and there is 
the law of conservation of charge.\n\nProtons and electrons do not decay . There can be no running out of electrons (or protons) except temporarily , until the new nuclei gather by electromagnetic attraction the necessary electrons for the new nuclei to become neutral." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92331046,"math_prob":0.97687393,"size":626,"snap":"2019-43-2019-47","text_gpt3_token_len":135,"char_repetition_ratio":0.13665594,"word_repetition_ratio":0.0,"special_character_ratio":0.19808307,"punctuation_ratio":0.11570248,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98438495,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T05:57:37Z\",\"WARC-Record-ID\":\"<urn:uuid:1ebdae2f-b9df-43d2-b57c-a400b54a62fe>\",\"Content-Length\":\"148600\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:434976de-3257-4ea2-8da7-c9130b2069f7>\",\"WARC-Concurrent-To\":\"<urn:uuid:a42e828a-df72-4bfd-9b3d-6ab2a1beb2eb>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/303647/why-dont-electrons-in-a-fission-reaction-run-out-due-to-chain-reactions\",\"WARC-Payload-Digest\":\"sha1:AFWPNRKWOUOGVW7ONBTINRS7GVGTZIGR\",\"WARC-Block-Digest\":\"sha1:NNNCZSWRLLUKRMRWI4J3NNOAR2U7JZ6L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987803441.95_warc_CC-MAIN-20191022053647-20191022081147-00111.warc.gz\"}"}
https://www.thinkdataanalytics.com/data-science-interview-questions/
[ "# 20 Data Science Interview Questions for a Beginner\n\n## Introduction\n\nSuccess is a process not an event.\n\nData Science is growing rapidly in all sectors. With the availability of so many technologies within the Data Science domain, it becomes tricky to crack any Data Science interview. In this article, we have tried to cover the most common Data Science interview questions asked by recruiters.\n\nThe most important concepts and interview questions are as follows :\n\n### 1. What is Linear Regression. What are the Assumptions involved in it?\n\nAnswer : The question can also be phrased as to why linear regression is not a very effective algorithm.\n\nVisit here: Data Analytics Companies\n\nLinear Regression is a mathematical relationship between an independent and dependent variable. The relationship is a direct proportion, relation making it the most simple relationship between the variables.\n\nY = mX+c\n\nY – Dependent Variable\n\nX – Independent Variable\n\nm and c are constants\n\nAssumptions of Linear Regression :\n\n1. The relationship between Y and X must be Linear.\n2. The features must be independent of each other.\n3. Homoscedasticity – The variation between the output must be constant for different input data.\n4. The distribution of Y along X should be the Normal Distribution.\n\n### 2. What is Logistic Regression? What is the loss function in LR?\n\nAnswer : Logistic Regression is the Binary Classification. It is a statistical model that uses the logit function on the top of the probability to give 0 or 1 as a result.\n\nThe loss function in LR is known as the Log Loss function. The equation for which is given as :\n\n### 3. Difference between Regression and Classification?\n\nAnswer :  The major difference between Regression and Classification is that Regression results in a continuous quantitative value while Classification is predicting the discrete labels.\n\nHowever, there is no clear line that draws the difference between the two. 
We have a few properties of both Regression and Classification. These are as follows:\n\nRegression\n\n• Regression predicts the quantity.\n• We can have discrete as well as continuous values as input for regression.\n• If input data are ordered with respect to the time it becomes time series forecasting.\n\nClassification\n\n• The Classification problem for two classes is known as Binary Classification.\n• Classification can be split into Multi- Class Classification or Multi-Label Classification.\n• We focus more on accuracy in Classification while we focus more on the error term in Regression.\n\n### 4. What is Natural Language Processing?  State some real life example of NLP.\n\nAnswer : Natural Language Processing is a branch of Artificial Intelligence that deals with the conversation of Human Language to Machine Understandable language so that it can be processed by ML models.\n\nExamples – NLP has so many practical applications including chatbots, google translate,  and many other real time applications like Alexa.\n\nSome of the other applications of NLP are in text completion, text suggestions, and sentence correction.\n\n### 5. Why do we need Evaluation Metrics. What do you understand by Confusion Matrix ?\n\nAnswer : Evaluation Metrics are statistical measures of model performance. They are very important because to determine the performance of any model it is very significant to use various Evaluation Metrics. Few of the evaluation Metrics are – Accuracy, Log Loss, Confusion Matrix.\n\nConfusion Matrix is a matrix to find the performance of a Classification model. It is in general a 2×2 matrix with one side as prediction and the other side as actual values.\n\n### 6. How does Confusion Matrix help in evaluating model performance?\n\nAnswer : We can find different accuracy measures using a confusion matrix. These parameters are Accuracy, Recall, Precision, F1 Score, and Specificity.\n\n### 7. What is the significance of Sampling? 
Name some techniques for Sampling?\n\nAnswer : For analyzing the data we cannot proceed with the whole volume at once for large datasets. We need to take some samples from the data which can represent the whole population. While making a sample out of complete data, we should take that data which can be a true representative of the whole data set.\n\nThere are mainly two types of Sampling techniques based on Statistics.\n\nProbability Sampling and Non Probability Sampling\n\nProbability Sampling – Simple Random, Clustered Sampling, Stratified Sampling.\n\nNon Probability Sampling – Convenience Sampling, Quota Sampling, Snowball Sampling.\n\n### 8.  What are Type 1 and Type 2 errors? In which scenarios the Type 1 and Type 2 errors become significant?\n\nAnswer :  Rejection of True Null Hypothesis is known as a Type 1 error. In simple terms, False Positive are known as a Type 1 Error.\n\nNot rejecting the False Null Hypothesis is known as a Type 2 error. False Negatives are known as a Type 2 error.\n\nType 1 Error is significant where the importance of being negative becomes significant. For example – If a man is not suffering from a particular disease marked as positive for that infection. The medications given to him might damage his organs.\n\nWhile Type 2 Error is significant in cases where the importance of being positive becomes important. For example – The alarm has to be raised in case of burglary in a bank. But a system identifies it as a False case that won’t raise the alarm on time resulting in a heavy loss.\n\n### 9. What are the conditions for Overfitting and Underfitting?\n\nIn Overfitting the model performs well for the training data, but for any new data it fails to provide output. For Underfitting the model is very simple and not able to identify the correct relationship. Following are the bias and variance conditions.\n\nOverfitting – Low bias and High Variance results in overfitted model. 
Decision tree is more prone to Overfitting.\n\nUnderfitting – High bias and Low Variance. Such model doesn’t perform well on test data also. For example – Linear Regression is more prone to Underfitting.\n\n### 10. What do you mean by Normalisation? Difference between Normalisation and Standardization?\n\nAnswer : Normalisation is a process of bringing the features in a simple range, so that model can perform well and do not get inclined towards any particular feature.\n\nFor example – If we have a dataset with multiple features and one feature is the Age data which is in the range 18-60 , Another feature is the salary feature ranging from 20000 – 2000000. In such a case, the values have a very much difference in them. Age ranges in two digits integer while salary is in range significantly higher than the age. So to bring the features in comparable range we need Normalisation.\n\nBoth Normalisation and Standardization are methods of Features Conversion. However, the methods are different in terms of the conversions. The data after Normalisation scales in the range of 0-1. While in case of Standardization the data is scaled such that it means comes out to be 0.\n\n11. What do you mean by Regularisation? What are L1 and L2 Regularisation?\n\nAnswer : Regulation is a method to improve your model which is Overfitted by introducing extra terms in the loss function. This helps in making the model performance better for unseen data.\n\nThere are two types of Regularisation :\n\nL1 Regularisation – In L1 we add lambda times the absolute weight terms to the loss function. In this the feature weights are penalised on the basis of absolute value.\n\nL2 Regularisation – In L2 we add lambda times the squared weight terms to the loss function. In this the feature weights are penalised on the basis of squared values.\n\n### 12. Describe Decision tree Algorithm and what are entropy and information gain?\n\nAnswer : Decision tree is a Supervised Machine Learning approach. 
It uses the predetermined decisions data to prepare a model based on previous output. It follows a system to identify the pattern and predict the classes or output variable from previous output .\n\nThe Decision tree works in the following manner –\n\nIt takes the complete set of Data and try to identify a point with highest information gain and least entropy to mark it as a data node and proceed further in this manner. Entropy and Information gain are deciding factor to identify the data node in a Decision Tree.\n\n### 13. What is Ensemble Learning. Give an important example of Ensemble Learning?\n\nAnswer : Ensemble Learning is a process of accumulating multiple models to form a better prediction model. In Ensemble Learning the performance of the individual model contributes to the overall development in every step. There are two common techniques in this – Bagging and Boosting.\n\nBagging – In this the data set is split to perform parallel processing of models and results are accumulated based on performance to achieve better accuracy.\n\nBoosting – This is a sequential technique in which a result from one model is passed to another model to reduce error at every step making it a better performance model.\n\nThe most important example of Ensemble Learning is Random Forest Classifier. It takes multiple Decision Tree combined to form a better performance Random Forest model.\n\n### 14. Explain Naive Bayes Classifier and the principle on which it works?\n\nAnswer : Naive Bayes Classifier algorithm is a probabilistic model. This model works on the Bayes Theorem principle.  The accuracy of Naive Bayes can be increased significantly by combining it with other kernel functions for making a perfect Classifier.\n\nBayes Theorem –  This is a theorem which explains the conditional probability. If we need to identify the probability of occurrence of Event A provided the Event B has already occurred such cases are known as Conditional Probability.\n\n### 15. 
What is Imbalanced Data? How do you manage to balance the data?\n\nAnswer : If a data is distributed across different categories and the distribution is highly imbalance. Such data are known as Imbalance Data. These kind of datasets causes error in model performance by making category with large values significant for the model resulting in an inaccurate model.\n\nThere are various techniques to handle imbalance data. We can increase the number of samples for minority classes. We can decrease the number of samples for classes with extremely high numbers of data points. We can use a cluster based technique to increase number of Data points for all the categories.\n\n### 16. Explain Unsupervised Clustering approach?\n\nAnswer : Grouping the data into different clusters based on the distribution of data is known as Clustering technique.\n\nThere are various Clustering Techniques –\n\n1. Density Based Clustering – DBSCAN , HDBSCAN\n\n2. Hierarchical Clustering.\n\n3. Partition Based Clustering\n\n4. Distribution Based Clustering.\n\n### 17. Explain DBSCAN Clustering technique and in what terms DBSCAN is better than K- Means Clustering?\n\nAnswer : DBSCAN( Density Based) clustering technique is an unsupervised approach which splits the vectors into different groups based on the minimum distance and number of points lying in that range. In DBSCAN Clustering we have two significant parameters –\n\nEpsilon – The minimum radius or distance between the two data points to tag them in the same cluster.\n\nMin – Sample Points – The number of minimum sample which should fall under that range to be identified as one cluster.\n\nDBSCAN Clustering technique has few advantages over other clustering algorithms –\n\n1. In DBSCAN we do not need to provide the fixed number of clusters. There can be as many clusters formed on the basis of the data points distribution. While in k nearest neighbour we need to provide the number of clusters we need to split our data into.\n\n2. 
In DBSCAN we also get a noise cluster identified which helps us in identifying the outliers. This sometimes also acts as a significant term to tune the hyper parameters of a model accordingly.\n\n### 18.  What do you mean by Cross Validation. Name some common cross Validation techniques?\n\nAnswer : Cross Validation is a model performance improvement technique. This is a Statistics based approach in which the model gets to train and tested with rotation within the training dataset so that model can perform well for unknown or testing data.\n\nIn this the training data are split into different groups and in rotation those groups are used for validation of model performance.\n\nThe common Cross Validation techniques are –\n\nK- Fold Cross Validation\n\nLeave p-out Cross Validation\n\nLeave-one-out cross-validation.\n\nHoldout method\n\n### 19.  What is Deep Learning ?\n\nAnswer : Deep Learning is the branch of Machine Learning and AI which tries to achieve better accuracy and able to achieve complex models. Deep Learning models are similar to human brains like structure with input layer, hidden layer, activation function and output layer designed in a fashion to give a human brain like structure.\n\nDeep Learning have so many real time applications –\n\nSelf Driving Cars\n\nComputer Vision and Image Processing\n\nReal Time Chat bots\n\nHome Automation Systems\n\n### 20 . Difference between RNN and CNN?", null, "" ]
[ null, "https://www.thinkdataanalytics.com/wp-content/uploads/2020/11/cropped-ThinkDataAnalytics-Favicon-150x150.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9179205,"math_prob":0.8916039,"size":13990,"snap":"2023-40-2023-50","text_gpt3_token_len":2738,"char_repetition_ratio":0.11854712,"word_repetition_ratio":0.02396514,"special_character_ratio":0.19278055,"punctuation_ratio":0.09879518,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9867781,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T07:31:42Z\",\"WARC-Record-ID\":\"<urn:uuid:8d05ce16-49ea-41b1-bc39-1beda5b60f03>\",\"Content-Length\":\"204029\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a596e32b-b34e-4e4f-8333-019a1aca3678>\",\"WARC-Concurrent-To\":\"<urn:uuid:81eb2432-f244-4574-9cea-690228919597>\",\"WARC-IP-Address\":\"35.213.138.76\",\"WARC-Target-URI\":\"https://www.thinkdataanalytics.com/data-science-interview-questions/\",\"WARC-Payload-Digest\":\"sha1:TKHXLNJG3MXKH3EM3L6H6SZJWUGUAY6L\",\"WARC-Block-Digest\":\"sha1:2KPGHAGQJYAK36NVOKZ5RDHEYAMHVZF3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510498.88_warc_CC-MAIN-20230929054611-20230929084611-00083.warc.gz\"}"}
https://www.scribd.com/document/329891187/T03240010220133079assignment-chapter1-3
[ "You are on page 1of 3\n\n# Set of Problems from David Tarnoff Book:\n\nChapter 1 :\n1. Define the term \"sample\" as it applies to digital systems.\n2. Define the term \"sampling rate\" as it applies to digital systems.\n3. What are the two primary problems that sampling could cause?\n4. Name the three parts of the system used to input an analog signal\ninto a digital system and describe their purpose.\n5. Name four benefits of a digital system over an analog system.\n6. Name three drawbacks of a digital system over an analog system.\n7. True or False: Since non-periodic pulse trains do not have a\npredictable format, there are no defining measurements of the\nsignal.\n8. If a computer runs at 12.8 GHz, what is the period of its clock\nsignal?\n9. If the period of a periodic pulse train is 125 nanoseconds, what is\nthe signal's frequency?\n10. If the period of a periodic pulse train is 50 microseconds, what\nshould the pulse width, tw, be to achieve a duty cycle of 15%?\n11. True or False: A signals frequency can be calculated from its duty\ncycle alone.\nChapter 2:\n1. What is the minimum number of bits needed to represent 76810\nusing unsigned binary representation?\n2. What is the largest possible integer that can be represented with a\n6-bit unsigned binary number?\n3. Convert each of the following values to decimal.\na) 100111012\nb) 101012\nc) 1110011012\nd) 011010012\n4. Convert each of the following values to an 8-bit unsigned binary\nvalue.\na) 3510\nb) 10010\nc) 22210\nd) 14510\n5. If an 8-bit binary number is used to represent an analog value in\nthe range from 010 to 10010, what does the binary value 011001002\nrepresent?\n6. If an 8-bit binary number is used to represent an analog value in\nthe range from 32 to 212, what is the accuracy of the system? In\nother words, if the binary number is incremented by one, how\nmuch change does it represent in the analog value?\n7. 
Assume a digital to analog conversion system uses a 10-bit integer\nto represent an analog temperature over a range of -25oF to 125oF.\nIf the actual temperature being read was 65.325oF, what would be\n\n## the closest possible value that the system could represent?\n\n8. What is the minimum sampling rate needed in order to successfully\ncapture frequencies up to 155 KHz in an analog signal?\n9. Convert the following numbers to hexadecimal.\na) 10101111001011000112\nb) 100101010010011010012\nc) 011011010010100110012\nd) 101011001000102\n10. Convert each of the following hexadecimal values to binary.\nb) 1DEF16\nc) 864516\nd) 925A16\na) ABCD16\n4142 Computer Organization and Design Fundamentals\n11. True or False: A list of numbers to be added would be a good\ncandidate for conversion using BCD.\n12. Determine which of the following binary patterns represent valid\nBCD numbers (signed or unsigned). Convert the valid ones to\ndecimal.\na.) 1010111100101100011\nb.) 10010101001001101001\nc.) 01101101001010011001\nd.) 11000110010000010000\ne.) 1101100101110010\nf.) 111100010010010101101000\ng.) 10101100100010\n13. Convert the decimal number 9640410 to BCD.\n14. Create the 5-bit Gray code sequence.\nChapter 3:\n1. True or False: 011010112 has the same value in both unsigned and\n2's complement form.\n2. True or False: The single-precision floating-point number\n10011011011010011011001011000010 is negative.\n3. What is the lowest possible value for an 8-bit signed magnitude\nbinary number?\n4. What is the highest possible value for a 10-bit 2's complement\nbinary number?\n5. Convert each of the following decimal values to 8-bit 2's\ncomplement binary.\na) 5410\nb) 4910 c) 12810\nd) 6610\ne) 9810\n6. Convert each of the following 8-bit 2's complement binary\nnumbers to decimal.\na) 100111012\nb) 000101012 c) 111001102\nd) 011010012\n\n## 70 Computer Organization and Design Fundamentals\n\n7. 
Convert each of the following decimal values to 8-bit signed\nmagnitude binary.\na) 5410\nb) 4910 c) 12710\nd) 6610\ne) 9810\n8. Convert each of the following 8-bit signed magnitude binary\nnumbers to decimal.\na) 100111012\nb) 000101012 c) 111001102\nd) 011010012\n9. Convert 1101.00110112 to decimal.\n10. Convert 10101.111012 to decimal.\n11. Convert 1.00011011101 x 234 to IEEE Standard 754 for singleprecision floating-point values.\n12. Convert the IEEE Standard 754 number\n11000010100011010100000000000000 to its binary equivalent.\n13. Using hexadecimal arithmetic, add 4D23116 to A413F16.\n14. Using BCD arithmetic, add 0111010010010110 to\n1000001001100001.\n15. Why is the method of shifting bits left or right to produce\nmultiplication or division results by a power of 2 preferred?\n16. How many positions must the number 00011011012 be shifted left\nin order to multiply it by 8?\n17. True or False: Adding 011011012 to 101000102 in 8-bit unsigned\nbinary will cause an overflow.\n18. True or False: Adding 011011012 to 101000102 in 8-bit 2's\ncomplement binary will cause an overflow.\n19. What would be the best binary representation for each of the\nfollowing applications?\n- Phone number\n- Age (positive integer)\n- Exam grade\n- Checking account balance\n- Value read from a postal scale\n- Price" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72482437,"math_prob":0.9149482,"size":4976,"snap":"2019-35-2019-39","text_gpt3_token_len":1383,"char_repetition_ratio":0.14400643,"word_repetition_ratio":0.12713936,"special_character_ratio":0.35751608,"punctuation_ratio":0.13160622,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.99596995,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T02:44:28Z\",\"WARC-Record-ID\":\"<urn:uuid:04e6b344-4548-4786-95c8-0c2465809326>\",\"Content-Length\":\"259252\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f016852-29fe-4a4a-8585-e84dbde1c562>\",\"WARC-Concurrent-To\":\"<urn:uuid:b9d4fb69-14bb-4c07-a0d3-3d851dd96ba4>\",\"WARC-IP-Address\":\"151.101.202.152\",\"WARC-Target-URI\":\"https://www.scribd.com/document/329891187/T03240010220133079assignment-chapter1-3\",\"WARC-Payload-Digest\":\"sha1:O4NXWMFPYC6QYRUZI2GDZI3Y4JWQSAMP\",\"WARC-Block-Digest\":\"sha1:DJGBPSKC4VUD44446OFHRAFVVEBBPTFL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313589.19_warc_CC-MAIN-20190818022816-20190818044816-00212.warc.gz\"}"}
https://www.transtutors.com/questions/calculating-financial-ratios-business-is-a-numbers-game-and-survival-in-the-marketpl-2533320.htm
# Calculating Financial Ratios

Business is a numbers game, and survival in the marketplace depends on having the right numbers in black ink. It is necessary to compare your venture to other businesses in your industry by calculating key financial ratios. Review the financial ratios in Figure 8.10. Go to BizStats by BizMiner and select "Financial Benchmark Reports", select "Sole Proprietorship", "Corporation" or "S Corp", then select your industry (for example, Retail: food & beverage); you will then see various business statistics and financial ratios. Look at the Industry Financial Ratios for your industry at the bottom, and recap this specific information in a 1-2 page paper including the industry you chose.

Figure 8.10: Financial Ratios (how to calculate them and what they mean in dollars and cents)

Balance Sheet Ratios

- Current ratio = Current Assets / Current Liabilities. Measures solvency: the number of dollars in current assets for every $1 in current liabilities. Example: a current ratio of 1.76 means that for every $1 of liabilities, the firm has $1.76 in current assets with which to pay it.
- Quick ratio = (Cash + Accounts Receivable) / Current Liabilities. Measures liquidity: the number of dollars in cash and accounts receivable for each $1 in current liabilities. Example: a quick ratio of 1.14 means that for every $1 of current liabilities, the firm has $1.14 in cash and accounts receivable with which to pay it.
- Cash ratio = Cash / Current Liabilities. Measures liquidity more strictly: the number of dollars in cash for every $1 in current liabilities. Example: a cash ratio of 0.17 means that for every $1 of current liabilities, the firm has $0.17 in cash with which to pay it.
- Debt-to-worth ratio = Total Liabilities / Net Worth. Measures financial risk: the number of dollars of debt owed for every $1 in net worth. Example: a debt-to-worth ratio of 1.06 means that for every $1 of net worth the owners have invested, the firm owes $1.06 of debt.

Income Statement Ratios

- Gross margin = Gross Margin / Sales. Measures profitability at the gross profit level: the number of dollars of gross margin produced for every $1 of sales. Example: a gross margin ratio of 34.4% means that for every $1 of sales, the firm produces 34.4 cents of gross margin.
- Net margin = Net Profit before Tax / Sales. Measures profitability at the net profit level: the number of dollars of net profit produced for every $1 of sales. Example: a net margin ratio of 2.9% means that for every $1 of sales, the firm produces 2.9 cents of net margin.

Overall Efficiency Ratios

- Sales to assets = Sales / Total Assets. Measures the efficiency of total assets in generating sales: the number of dollars in sales produced for every $1 invested in total assets. Example: a sales-to-assets ratio of 2.35 means that for every $1 invested in total assets, the firm generates $2.35 in sales.
- Return on assets = Net Profit before Tax / Total Assets. Measures the efficiency of total assets in generating net profit: the number of dollars in net profit produced for every $1 invested in total assets. Example: a return on assets ratio of 7.1% means that for every $1 invested in assets, the firm is generating 7.1 cents in net profit.
- Return on investment = Net Profit before Tax / Net Worth. Measures the efficiency of net worth in generating net profit: the number of dollars in net profit produced for every $1 invested in net worth. Example: a return on investment ratio of 16.1% means that for every $1 invested in net worth, the firm is generating 16.1 cents in net profit before tax.

Specific Efficiency Ratios

- Inventory turnover = Cost of Goods Sold / Inventory. Measures the rate at which inventory is being used on an annual basis. Example: an inventory turnover ratio of 9.81 means that the average dollar volume of inventory is used up almost ten times during the fiscal year.
- Inventory turn-days = 360 / Inventory Turnover. Converts the inventory turnover ratio into an average "days inventory on hand" figure. Example: an inventory turn-days ratio of 37 means that the firm keeps an average of 37 days of inventory on hand throughout the year.
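The Figure 8.10 formulas are easy to script. In the sketch below the statement figures and field names are invented for illustration (chosen so that the current, quick, cash and debt-to-worth ratios reproduce the worked examples of 1.76, 1.14, 0.17 and 1.06):

```python
# Sketch of the Figure 8.10 ratio calculations on made-up statement figures.

def financial_ratios(fs: dict) -> dict:
    return {
        "current": fs["current_assets"] / fs["current_liabilities"],
        "quick": (fs["cash"] + fs["accounts_receivable"]) / fs["current_liabilities"],
        "cash": fs["cash"] / fs["current_liabilities"],
        "debt_to_worth": fs["total_liabilities"] / fs["net_worth"],
        "gross_margin": fs["gross_margin"] / fs["sales"],
        "net_margin": fs["net_profit_before_tax"] / fs["sales"],
        "sales_to_assets": fs["sales"] / fs["total_assets"],
        "return_on_assets": fs["net_profit_before_tax"] / fs["total_assets"],
        "return_on_investment": fs["net_profit_before_tax"] / fs["net_worth"],
        "inventory_turnover": fs["cogs"] / fs["inventory"],
    }

fs = {
    "current_assets": 176_000, "current_liabilities": 100_000,
    "cash": 17_000, "accounts_receivable": 97_000,
    "total_liabilities": 106_000, "net_worth": 100_000,
    "gross_margin": 344_000, "sales": 1_000_000,
    "net_profit_before_tax": 29_000, "total_assets": 425_000,
    "cogs": 656_000, "inventory": 66_870,
}
r = financial_ratios(fs)
print(round(r["current"], 2), round(r["quick"], 2))  # 1.76 1.14
print(round(360 / r["inventory_turnover"]))          # about 37 days of inventory on hand
```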
https://leanprover-community.github.io/archive/stream/217875-Is-there-code-for-X%3F/topic/Exponential.html
## Stream: Is there code for X?

### Topic: Exponential

#### Ashvni Narayanan (Dec 16 2020 at 16:21):

Is there a file which describes real.exp^x as a tsum or sum?

Any help is appreciated, thank you!

#### Eric Wieser (Dec 16 2020 at 16:24):

Perhaps something near src#complex.exp'?

#### Kevin Buzzard (Dec 16 2020 at 16:25):

What do you mean by real.exp^x? Do you just mean real.exp?

#### Ashvni Narayanan (Dec 16 2020 at 16:28):

Yes, sorry, I mean the Taylor series corresponding to e^x where e is real.exp

#### Johan Commelin (Dec 16 2020 at 16:28):

No, exp is the function, not the number

#### Johan Commelin (Dec 16 2020 at 16:29):

`real.exp x` $= e^x$

#### Heather Macbeth (Dec 16 2020 at 16:30):

AFAIK, not currently, but @Yury G. Kudryashov is working on a refactor that will allow for this.

#### Heather Macbeth (Dec 16 2020 at 16:30):

Why do you need it?

#### Ashvni Narayanan (Dec 16 2020 at 16:32):

I'm trying to prove the following lemma:

```lean
lemma exp_bernoulli_neg : ∀ t : ℚ,
  ((∑' i : ℕ, ((bernoulli_neg i) : ℚ) * t^i / (nat.factorial i)) : ℝ) * (real.exp t - 1) = t :=
```

where `bernoulli_neg i` is the ith Bernoulli number.

#### Kevin Buzzard (Dec 16 2020 at 16:33):

You would be better off proving a theorem about formal power series than worrying about convergence of real power series here

#### Kevin Buzzard (Dec 16 2020 at 16:33):

We have formal power series in Lean

#### Heather Macbeth (Dec 16 2020 at 16:43):

Kevin, I guess that depends on whether Ashvni needs an identity of formal power series or an identity about real functions. (But maybe you know her intended application?)

#### Kevin Buzzard:

Indeed I do :-)

#### Kevin Buzzard (Dec 16 2020 at 16:58):

I think that the formal power series identity will boil down to checking a fact about Bernoulli numbers which she already knows (because she showed me a proof last week), whereas checking the real identity will involve a whole lot of kerfuffle ultimately reducing to checking the formal power series identity. What do you think Ashvni? The definition of formal power series multiplication and the fact that two formal power series are equal iff they have the same coefficients should reduce you directly to something you know already, with the advantage that you can stay working over the rationals.

#### Kevin Buzzard (Dec 16 2020 at 17:00):

I think this raises a more general question about "formal exponentiation", which has come up before. In stuff like p-adic analysis one still has exponentiation exp x and logarithm log (1 + x) but they're defined by formal power series (and of course the radius of convergence story is completely different). It would be nice to know that exp and log are locally inverse to each other when defined on e.g. the p-adic numbers. Here the slick proof is to use real analytic methods to get the power series for exp and log, observe they're inverses on the reals, deduce that they're inverses as formal power series, and then deduce that they're inverses on the p-adics!

#### Mario Carneiro (Dec 16 2020 at 17:07):

In that case we should probably have a set of theorems that relate formal power series to real power series (or power series that converge in a more general topological group)

#### Ashvni Narayanan (Dec 16 2020 at 17:13):

Kevin Buzzard said:

> I think that the formal power series identity will boil down to checking a fact about Bernoulli numbers which she already knows (because she showed me a proof last week), whereas checking the real identity will involve a whole lot of kerfuffle ultimately reducing to checking the formal power series identity. What do you think Ashvni? The definition of formal power series multiplication and the fact that two formal power series are equal iff they have the same coefficients should reduce you directly to something you know already, with the advantage that you can stay working over the rationals.

In the specific case of Bernoulli numbers and polynomials, I believe working with formal power series will be fine. I am more concerned with the proofs that will follow for p-adic L functions and zeta functions. As far as I can see, according to Washington, I think one can do with the formal power series identities.

#### Kevin Buzzard (Dec 16 2020 at 17:27):

Just as for the reals (and indeed the proof is even easier), when you're in a domain where your power series are converging, the sum/product as a function agrees with the sum/product as a power series.
I think Mario is right -- one could imagine a cute little API here (probably one needs topological rings if one is dealing with power series, although it will boil down to topological groups at some point).

#### Ashvni Narayanan (Dec 16 2020 at 18:23):

I see, thank you! Maybe I could do it at some point in the future.

#### Kevin Buzzard (Dec 16 2020 at 19:45):

There is already an evaluation map from polynomials to rings, and I guess people spent some time figuring out how to set it up (I wasn't following it closely but there was eval and aeval and maybe even eval_2 or something). I guess the thing to do would be to see what decision people ultimately came to, and then set up the same system with $f\in R[[X]]$ and $t\in M$ where $M$ is a topological ring which is also an $R$-algebra; if the formal power series defining $f(t)$ is summable then one can evaluate $f$ at $t$; one can then attempt to prove that if $f(t)$ and $g(t)$ are both summable then $(f+g)(t)=f(t)+g(t)$ and $(fg)(t)=f(t)g(t)$. These are the sorts of proofs which can sometimes be done in about 3 lines if you know the filter tricks I suspect.

#### Mario Carneiro (Dec 16 2020 at 20:03):

do we have the truncation map from a formal power series to a polynomial?

#### Mario Carneiro (Dec 16 2020 at 20:05):

That seems easier, and we will need lots of properties about it in order for the theory of formal series evaluation in a topological ring to go well, because it's basically the limit of that map as the truncation order tends to $\infty$

#### Heather Macbeth (Dec 16 2020 at 20:05):

I think part of the issue is making it work for multivariate polynomials and power series; this is what Yury's working on.

#### Johan Commelin (Dec 16 2020 at 20:06):

https://leanprover-community.github.io/mathlib_docs/ring_theory/power_series.html#power_series.trunc

#### Heather Macbeth (Dec 16 2020 at 20:07):

If I understand correctly there will be a map from docs#mv_power_series to docs#formal_multilinear_series

#### Mario Carneiro (Dec 16 2020 at 20:07):

oh, doing this multivariate sounds like fun, there are so many ways to take the limit

#### Johan Commelin (Dec 16 2020 at 20:08):

right, that's why I only defined trunc in the single-variable case

#### Johan Commelin (Dec 16 2020 at 20:09):

it's not exactly clear what the right version for trunc would be in general

#### Heather Macbeth (Dec 16 2020 at 20:09):

and then the general theory, docs#formal_multilinear_series.le_radius_of_bound etc, will be used to construct the standard functions such as exp.

#### Mario Carneiro (Dec 16 2020 at 20:10):

you could throw it into tsum, I suppose it will take the limit on the filter of monomials wrt inclusion, but it's not entirely clear to me if that's the limit you want (it's the "strongest" kind of limit you can ask for)

#### Mario Carneiro (Dec 16 2020 at 20:10):

it's equivalent to absolute convergence for functions on nat

#### Adam Topaz (Dec 16 2020 at 20:14):

Johan Commelin said:

> it's not exactly clear what the right version for trunc would be in general

For power series the mathematically correct notion is the image of the power series in $A[X_1,\ldots,X_n]/(X_1,\ldots,X_n)^M$, i.e. you kill all monomials of total degree at least $M$. Laurent series is a different question altogether :)

#### Sebastien Gouezel (Dec 16 2020 at 20:15):

On the formal multilinear series side, this is docs#formal_multilinear_series.partial_sum

#### Adam Topaz (Dec 16 2020 at 20:17):

Oh, but mv_powerseries can have infinitely many variables... oof

#### Mario Carneiro (Dec 16 2020 at 20:17):

This is good because that's exactly what tsum does

#### Mario Carneiro (Dec 16 2020 at 20:17):

with infinitely many variables it takes the limit as you add more variables too

#### Adam Topaz (Dec 16 2020 at 20:19):

So algebraically this would be the limit of the mv_power_series rings over finitely many variables contained in the indexing set.

#### Mario Carneiro (Dec 16 2020 at 20:19):

of course that means that things like $X_1+X_2+\dots$ don't converge on $\Bbb R[[X_1,X_2,\dots]]$

#### Adam Topaz:

I'm confused.

#### Mario Carneiro (Dec 16 2020 at 20:20):

Er, I mean the limit of that formal power series as mapped to $\Bbb R$ with $X_i\mapsto 1$ or something doesn't converge

#### Adam Topaz (Dec 16 2020 at 20:20):

Sure, you need to map to zero.

#### Mario Carneiro (Dec 16 2020 at 20:21):

If you map everything to 0 that's a kind of boring zeries

#### Mario Carneiro (Dec 16 2020 at 20:21):

unintentional pun

#### Mario Carneiro (Dec 16 2020 at 20:22):

If you map $X_i\mapsto 2^{-i}$ then it goes to 1 as you would expect

#### Adam Topaz (Dec 16 2020 at 20:23):

I guess I don't know what the definition of $\mathbb{R}[[X_1,X_2,\ldots]]$ is... (I mean algebraically)

#### Adam Topaz (Dec 16 2020 at 20:23):

Is it the completion w.r.t. the ideal $(X_1,X_2,\ldots)$?

#### Mario Carneiro (Dec 16 2020 at 20:23):

It is something like `multiset nat →₀ real`

#### Mario Carneiro (Dec 16 2020 at 20:24):

oh wait no →₀ this isn't a polynomial

#### Mario Carneiro (Dec 16 2020 at 20:24):

it's just `multiset nat → real`

#### Mario Carneiro (Dec 16 2020 at 20:25):

I don't understand your question/assertion. What is the ideal $(X_1, X_2, \dots)$?

#### Mario Carneiro (Dec 16 2020 at 20:25):

We need to construct the ring before we can do algebra on it

#### Adam Topaz (Dec 16 2020 at 20:26):

I mean take the polynomial ring $\mathbb{R}[X_1,X_2,\ldots]$, and take the completion of that w.r.t. $(X_1,X_2,\ldots)$.

#### Adam Topaz (Dec 16 2020 at 20:27):

That's one plausible definition of $\mathbb{R}[[X_1,X_2,\ldots]]$.

#### Mario Carneiro (Dec 16 2020 at 20:34):

I am no expert on these things, but that seems plausible based on the wikipedia definition. It's not obvious to me why the completion wouldn't have things like $X_1^\infty$ in it though

#### Riccardo Brasca (Dec 16 2020 at 21:16):

I think that $X_1 + X_2 + \cdots$ does not exist in $\mathbf{Z}[[X_1, X_2, \ldots]]$ (meaning that the series does not converge), if we define the latter as the completion of $\mathbf{Z}[X_1, X_2, \ldots]$ w.r.t. $(X_1, X_2, \ldots)$.
The general term does not tend to $0$, right?

#### Kevin Buzzard (Dec 16 2020 at 21:24):

My instinct is to set up the one variable case, do the n variable case by induction on n and forget about the general case because I'm not sure it's useful

#### Mario Carneiro:

noo

#### Mario Carneiro (Dec 16 2020 at 21:24):

induction on this kind of thing is painful

#### Mario Carneiro (Dec 16 2020 at 21:24):

index sets can usually be trivially finite or infinite

#### Kevin Buzzard (Dec 16 2020 at 21:25):

In which case my instinct is to stick to a fintype when doing evaluation

#### Kevin Buzzard (Dec 16 2020 at 21:25):

Because then there's no ambiguity

#### Mario Carneiro (Dec 16 2020 at 21:25):

I think the right thing will just happen automatically

#### Kevin Buzzard (Dec 16 2020 at 21:25):

I think that more than one thing can happen but I'm not sure I care about what happens in the infinite variable case

#### Mario Carneiro (Dec 16 2020 at 21:26):

then let's just do it and let what happens happen, and pretend that's what we wanted all along

#### Adam Topaz (Dec 16 2020 at 21:28):

The one natural place I know of where you have to consider expressions like $X_1 + X_2 + \cdots$ is when defining the ring of symmetric functions in countably many variables, which can be used to construct the ring of big Witt vectors. In this case you work with a subring of

$\lim_{n} \mathbb{Z}[X_1,\ldots,X_n]$

where the transition maps

$\mathbb{Z}[X_1,\ldots,X_{n+1}] \to \mathbb{Z}[X_1,\ldots,X_n]$

send $X_{n+1}$ to $0$.

#### Riccardo Brasca (Dec 16 2020 at 21:29):

It seems to me that an explicit description of $R[[X_0, X_1, \ldots]]$ is functions $f \colon \mathbf{N}^{(\mathbf{N})} \to R$ such that for all $d$ the set $\{ x \in \mathbf{N}^{(\mathbf{N})} \text{ such that } f(x) \neq 0 \text{ and } \sum_i x(i) = d \}$ is finite. In practice $x$ is just a finite sequence of natural numbers, giving a monomial (the exponents of $X_0,X_1,\ldots$ are given by the sequence) of (finite) degree $\sum_i x(i)$. Then a series exists if and only if it has only finitely many terms of degree $d$, for any $d$.

#### Riccardo Brasca (Dec 16 2020 at 21:30):

Of course one can consider even bigger rings, but that's what we get taking completion w.r.t. $(X_1, X_2, \ldots)$

#### Kevin Buzzard (Dec 16 2020 at 21:33):

In Witt vectors I think you're working with a direct limit which is much smaller

#### Adam Topaz (Dec 16 2020 at 21:33):

No it's the inverse limit.

#### Adam Topaz (Dec 16 2020 at 21:34):

Well, it's the symmetric elements in that limit.

#### Kevin Buzzard (Dec 16 2020 at 21:34):

Aah you're right but it's the inverse limit of the polynomial rings not the power series rings

#### Adam Topaz (Dec 16 2020 at 21:34):

I like this reference for this approach: https://arxiv.org/pdf/math/0407227.pdf (see examples 2.10 and 3.2)

Last updated: May 17 2021 at 16:26 UTC
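The identity in Ashvni's lemma can be checked as a statement about truncated formal power series, in the spirit of Kevin's suggestion. The following is a plain Python sketch in exact rational arithmetic (not mathlib code); the Bernoulli numbers are generated from the standard recurrence, which gives the $B_1 = -1/2$ convention that the lemma needs:

```python
from fractions import Fraction
from math import comb, factorial

N = 12  # truncation order of the formal power series

# Bernoulli numbers with the B_1 = -1/2 convention, via the recurrence
# sum_{k=0}^{n} C(n+1, k) * B_k = 0 for n >= 1.
B = [Fraction(1)]
for n in range(1, N):
    B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))

def mul_trunc(f, g):
    """Cauchy product of two truncated power series given as coefficient lists."""
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N)]

bernoulli_series = [B[i] / factorial(i) for i in range(N)]    # sum_i B_i t^i / i!
exp_minus_one = [Fraction(0)] + [Fraction(1, factorial(i)) for i in range(1, N)]  # e^t - 1

product = mul_trunc(bernoulli_series, exp_minus_one)
print(product[:3] == [0, 1, 0])  # the product is the series t, coefficient by coefficient
```

Coefficient $n \ge 2$ of the product is $\frac{1}{n!}\sum_{i=0}^{n-1}\binom{n}{i}B_i$, which vanishes by the very recurrence used to generate the $B_i$; this is the "fact about Bernoulli numbers" the thread alludes to.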
https://www.openaircraft.com/ccx-doc/ccx/node189.html
### Crack propagation

In CalculiX a rather simple model to calculate cyclic crack propagation is implemented. In order to perform a crack propagation calculation the following procedure is to be followed:

• A static calculation (usually called a Low Cycle Fatigue = LCF calculation) for the uncracked structure (using volumetric elements) for one or more steps must have been performed and the results (at least stresses; if applicable, also the temperatures) must have been stored in a frd-file.

• Optionally a frequency calculation (usually called a High Cycle Fatigue = HCF calculation) for the uncracked structure has been performed and the results (usually stresses) have been stored in a frd-file.

• For the crack propagation itself a model consisting of at least all cracks to be considered, meshed using S3 shell elements, must be created. The orientation of all shell elements used to model one and the same crack should be consistent, i.e. when viewing the crack from one side of the crack shape all nodes should be numbered clockwise or all nodes should be numbered counterclockwise. Preferably, the mesh of the uncracked structure should also be contained (the crack propagation is easier to interpret if the structure in which the crack propagates is also visualized).

• The material parameters for the crack propagation law implemented in CalculiX must have been determined. Alternatively, the user may code his/her own crack propagation law in routine crackrate.f.

• The procedure *CRACK PROPAGATION must have been selected with appropriate parameters. Within the *CRACK PROPAGATION step the optional keyword card *HCF may have been selected.

In CalculiX, the following crack propagation law has been implemented:

[equation (578): the crack growth rate law da/dN, image lost in extraction]

where the factor of equation

[(579), image lost in extraction]

accounts for the threshold range, the factor of equation

[(580), image lost in extraction]

for the critical cut-off, and the factor of equation

[(581), image lost in extraction]

for the stress-ratio influence.
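The images of equations (578)-(581) were lost in extraction, so the following is only a generic illustration: a plain Paris-range rate evaluation (threshold, cut-off and stress-ratio corrections omitted), showing how a rate law of this family is applied along a crack front. The constants and ΔK samples are invented, not CalculiX input values:

```python
# Generic Paris-range sketch; C, m and the delta_K samples are invented, and
# the threshold/cut-off/stress-ratio factors of Eqs. (579)-(581) are omitted.
import math

def paris_rate(delta_K: float, C: float = 1e-11, m: float = 3.0) -> float:
    """da/dN = C * delta_K**m  (length per cycle)."""
    return C * delta_K ** m

def cycles_for_increment(front_delta_K: list, da_max: float) -> int:
    """Cycle count such that the fastest-growing front node advances by at
    most da_max, mirroring the rule that the maximum rate across all crack
    front locations sets the number of cycles in an increment."""
    worst = max(paris_rate(dk) for dk in front_delta_K)
    return math.floor(da_max / worst)

front = [12.0, 15.0, 18.0, 15.5, 12.5]             # delta_K along the front, MPa*sqrt(m)
print(cycles_for_increment(front, da_max=0.5e-3))  # cycles in this increment
```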
The material constants have to be entered by using a *USER MATERIAL card with the following 8 constants per temperature data point (in that order): [the eight constants of equations (578)-(581); their symbols were images and are lost in extraction; the last constant is dimensionless [-], and the lost unit symbols denote the unit of force and the unit of length]. Notice that the first part of the law corresponds to the Paris law. Indeed, the classical Paris constant C can be obtained from:

[equation (582), image lost in extraction]

Vice versa, the rate constant of the law can be obtained from C using the above equation once the reference range has been chosen; the rate constant is the growth rate obtained at that reference range (just considering the Paris range). For a user material, a maximum of 8 constants can be defined per line (cf. *USER MATERIAL). Therefore, after entering the 8 crack propagation constants, the corresponding temperature has to be entered on a new line.

The crack propagation calculation consists of a number of increments during which the crack propagates a certain amount. For each increment in a LCF calculation the following steps are performed:

• The actual shape of the cracks is analyzed, the crack fronts are determined and the stresses and temperatures (if applicable, else zero) at the crack front nodes are interpolated from the stress and temperature field in the uncracked structure.

• The stress tensor at the front nodes is projected on the local tangent plane, yielding a normal component (local y-direction), a shear component orthogonal to the crack front (local x-direction) and one parallel to the crack front (local z-direction), leading to the K-factors K_I, K_II and K_III using the formulas:

[equations (583)-(585): K_I, K_II and K_III in terms of the projected stress components, the crack length and shape factors; images lost in extraction]

where the shape factors take the form of equation [(586), image lost] for subsurface cracks, of equation [(587), image lost] for surface cracks spanning an angle [symbol lost] and of equation [(588), image lost] for surface cracks spanning an angle of 0 (i.e. a one-sided crack in a two-dimensional plate).
For an angle in between 0 and the limiting value the shape factors are linearly interpolated in between the latter two formulas. In the above formulas the local coordinate along the crack front takes its two extreme values at the free surface and 0 in the middle of the front. If the user prefers to use more detailed shape factors, user routine crackshape.f can be recoded.

• The crack length a in the above formulas is determined in two different ways, depending on the value of the parameter LENGTH on the *CRACK PROPAGATION card:

  • for LENGTH=CUMULATIVE the crack length is obtained by incrementally adding the crack propagation increments to the initial crack length. The initial length is determined using the LENGTH=INTERSECTION method.
  • for LENGTH=INTERSECTION a plane locally orthogonal to the crack front is constructed and subsequently a second intersection of this plane with the crack front is sought. The distance in between these intersection points is the crack length (except for a subsurface crack, for which this length is divided by two). Notice that for intersection purposes the crack front of a surface crack is artificially closed by the intersection curve of the crack shape with the free surface in between the intersection points of the crack front.

  Subsequently, the crack length is smoothed along the crack front according to Eq. (589), a weighted average over the n closest nodes in which the weight of node j decreases with its Euclidean distance to the node under consideration, normalized by the distance to the farthest of these nodes; n is a fixed fraction of the total number of nodes along the front, e.g. 90 %.

• From the stress intensity factors an equivalent K-factor and a deflection angle are calculated using a light modification of the formulas by Richard, in order to cope with negative K_I values as well (Eqs. (590) and (591), with the branch selected by the sign condition).
Subsequently, the equivalent K-factor and the deflection angle are smoothed in the same way as the crack length. Finally, if any of the deflection angles exceeds the maximum defined by the user (second entry underneath the *CRACK PROPAGATION card), all values along the front are scaled appropriately.

Notice that at each crack front location as many equivalent K-factor and deflection angle values are calculated as there are steps in the static calculation of the uncracked structure.

• The crack propagation increment for this increment is determined. It is the minimum of:

  • the user defined value (first entry underneath the *CRACK PROPAGATION card),
  • one fifth of the minimum crack front curvature,
  • one fifth of the smallest crack length.

• The crack propagation rate at every crack front location is determined. If there is only one step, it results from the direct application of the crack propagation law with the equivalent K-factor range. For several steps, the maximum minus the minimum of the equivalent K-factor is taken. Notice that the crack rate routine is documented as a user subroutine: for missions consisting of several steps the user can define his/her own procedure for more complex situations such as cycle extraction. The maximum value of da/dN across all crack front locations determines the number of cycles in this increment.

• For each crack front node the location of the propagated node is determined. This node lies in a plane locally orthogonal to the tangent vector along the front. To this end a local coordinate system is created (the same as for the calculation of K_I, K_II and K_III) consisting of:

  • the local tangent vector t;
  • the local normal vector, obtained as the mean of the normal vectors on the shell elements to which the nodal front position belongs; this vector is subsequently projected into the plane normal to t and normalized, yielding a vector n;
  • a vector in the propagation direction.
This assumes that the tangent vector was chosen such that the corkscrew rule points into the propagation direction when running along the crack front.

• Then, new nodes are created in between the propagated nodes such that they are equidistant. The target distance in between these nodes is the mean distance in between the nodes along the initial crack front.

• Finally, new shell elements are generated covering the crack propagation increment, and the results (K-values, crack length etc.) are stored in frd-format for visualization. Then a new increment can start. The number of increments is governed by the INC parameter on the *STEP card.

For a combined LCF-HCF calculation, triggered by the *HCF keyword in the *CRACK PROPAGATION procedure, the picture is slightly more complicated. On the *HCF card the user defines a scaling factor and a step from the static calculation on which the HCF loading is to be applied. This is usually the static loading at which the modal excitation occurs. At this step a HCF cycle is considered, consisting of the LCF+HCF and the LCF-HCF loading. The effect is as follows:

• If this cycle leads to propagation and HCF propagation is not allowed (MAX CYCLE=0 on the *HCF card; this is the default), the program stops with an appropriate error message.
• If it leads to propagation and HCF propagation is allowed (MAX CYCLE > 0 on the *HCF card), the number of cycles is determined to reach the desired crack propagation in this increment and the next increment is started. No LCF propagation is considered in this increment.
• If it does not lead to HCF propagation, LCF propagation is considered for the static loading in which the LCF loading of the step to which HCF applies is replaced by the LCF+HCF loading. The propagation is calculated as usual.

Right now, the output of a *CRACK PROPAGATION step cannot be influenced by the user.
By default a data set is created in the frd-file consisting of the following information (most of this information can be changed in user subroutine crackrate.f):

• The dominant step. This is the step with the largest equivalent K-factor (over all steps).
• DeltaKEQ: the value of the equivalent K-factor range for the main cycle. In the present implementation this corresponds to the largest value over all steps.
• KEQMIN: the minimal value of the equivalent K-factor (over all steps).
• KEQMAX: the largest value of the equivalent K-factor (over all steps).
• K1WORST: the largest value of K_I multiplied by its sign (over all steps).
• K2WORST: the largest value of K_II multiplied by its sign (over all steps).
• K3WORST: the largest value of K_III multiplied by its sign (over all steps).
• PHI: the deflection angle.
• R: the R-value of the main cycle. In the present implementation this is zero.
• DADN: the crack propagation rate.
• KTH: not used.
• INC: the increment number. This is the same for all nodes along one and the same crack front.
• CYCLES: the number of cycles since the start of the calculation. This number is common to all crack front nodes.
• CRLENGTH: the crack length.
• DOM_SLIP: not used.
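The selection of the crack propagation increment and the resulting cycle count, as described in the steps above, can be sketched as follows (a minimal illustration; the function and argument names are mine, not CalculiX code):

```python
def increment_and_cycles(user_da, min_curvature, min_crack_length, rates):
    """Per-increment bookkeeping of the crack propagation loop (sketch).

    user_da          : user-defined maximum increment (first entry under
                       *CRACK PROPAGATION)
    min_curvature    : minimum crack front curvature measure
    min_crack_length : smallest crack length along all fronts
    rates            : da/dN at every crack front location
    Returns (increment size, number of cycles in this increment).
    """
    # The increment is the minimum of the user value, one fifth of the
    # minimum front curvature and one fifth of the smallest crack length.
    da = min(user_da, min_curvature / 5.0, min_crack_length / 5.0)
    # The fastest-growing front location determines the cycle count.
    n_cycles = da / max(rates)
    return da, n_cycles
```

For example, with a user increment of 0.5, a curvature measure of 10, a smallest crack length of 4 and rates of 1e-6 and 2e-6 per cycle, the increment is 0.5 and the increment spans 250 000 cycles.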
https://www.openaircraft.com/ccx-doc/ccx/node189.html
http://mymathforum.com/differential-equations/49039-differential-equation.html
Differential Equations (Ordinary and Partial) forum thread: differential equation

**#1 (fromage, December 24th, 2014):** Solve the equation: dy/dx + sin(dy/dx) = x, with y = 0 when x = 0. This is the last question on the problem sheet; there is nothing else like it and we have not been taught how to approach something like this. I can't see how to start this at all. Any ideas?

**#2 (Math Team):** Can you check for typing errors or post a picture?

**#3 (fromage, December 25th, 2014):** It's question 10 on the attachment. May I ask why you think there is a typo? (Attached image: hard q.jpg)

**#4 (Math Team):** Hello, fromage!

> Solve the equation: $\dfrac{dy}{dx} + \sin\left(\dfrac{dy}{dx}\right) = x$, with $y(0) = 0$.

We are concerned because the equation has the form $u + \sin u = x$. This is a transcendental equation, in which a variable is both 'inside' and 'outside' of a transcendental function. It cannot be solved by conventional means; the solution can only be approximated.

**#5 (Math Team):** Because at first sight it looks likely to be impossible to solve!

**#6 (fromage, December 27th, 2014):** OK, my professor got back to me and he said it could be solved parametrically in terms of dy/dx, i.e.:
Let $u = dy/dx$, so $x = u + \sin u$. Differentiating gives $1 = \frac{du}{dx}(1 + \cos u)$, hence $1 = u\,\frac{du}{dy}(1 + \cos u)$, so $y = \int u(1 + \cos u)\,du$. After doing the integral and putting in the initial condition:

$y = \tfrac{1}{2}\left(\dfrac{dy}{dx}\right)^2 + \dfrac{dy}{dx}\sin\left(\dfrac{dy}{dx}\right) + \cos\left(\dfrac{dy}{dx}\right) - 1.$

So here's my new question: is this useful? I can see it answers the question, but does it give us more information? To me it seems it does, but I don't know if having both x and y in terms of dy/dx can help us more. And if it is useful (and I mean no offence at all to any of you, I'm just curious about the reasoning behind this method), why didn't you see this?

**#7 (fromage, December 30th, 2014):** Bump? Hello?

**#8 (Senior Member):** Mm... I wouldn't call that solving, but rather rearranging the equation into another form. It is not clear which one is more useful; it strongly depends on the situation. If one really needs to get numbers out, I'd use the first form of the equation: it is much simpler and needs fewer computations. I'd also think it is more accurate if you need the $(x, y)$ pairs, because you can solve for the derivative just by giving the value of $x$. As for the second form, I can't quickly find a use for it. Maybe you or your professor can? Also, if you are really interested, the solution ($y'$) of the first form of the equation can be written as an infinite series in terms of Bessel functions.
This series can further be integrated, and you have a solution in some form.
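Getting numbers out of the first form, as suggested in post #8, amounts to solving the transcendental equation $u + \sin u = x$ pointwise for $u = dy/dx$, e.g. by Newton's method (the derivative $1 + \cos u$ is positive for $|u| < \pi$), after which $y$ follows from the parametric formula derived in post #6. A minimal Python sketch (function names are mine):

```python
import math

def solve_u(x, tol=1e-12):
    """Solve u + sin(u) = x for u = dy/dx by Newton's method.

    f(u) = u + sin(u) - x is strictly increasing for |u| < pi,
    so Newton iteration from a moderate starting point converges.
    """
    u = x / 2.0
    for _ in range(100):
        f = u + math.sin(u) - x
        u -= f / (1.0 + math.cos(u))   # Newton step, f'(u) = 1 + cos(u)
        if abs(f) < tol:
            break
    return u

def y_of_x(x):
    """Evaluate y(x) from the parametric solution of the thread:
    y = u^2/2 + u*sin(u) + cos(u) - 1, with u = dy/dx."""
    u = solve_u(x)
    return 0.5 * u * u + u * math.sin(u) + math.cos(u) - 1.0
```

For instance, `y_of_x(0.0)` returns 0, consistent with the initial condition, and for any $x$ the recovered $u$ satisfies $u + \sin u = x$ to the requested tolerance.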
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-u-int-array/index.html
# UIntArray

Platforms: Common, JVM, JS, Native. Since Kotlin 1.3.

`@ExperimentalUnsignedTypes inline class UIntArray : Collection<UInt>`

### Constructors

#### <init>
Creates a new array of the specified size, with all elements initialized to zero.
`UIntArray(size: Int)`

### Properties

#### size
Returns the number of elements in the array.
`val size: Int`

### Functions

#### contains
Checks if the specified element is contained in this collection.
`fun contains(element: UInt): Boolean`

#### containsAll
Checks if all elements in the specified collection are contained in this collection.
`fun containsAll(elements: Collection<UInt>): Boolean`

#### get
Returns the array element at the given index. This method can be called using the index operator.
`operator fun get(index: Int): UInt`

#### isEmpty
Returns `true` if the collection is empty (contains no elements), `false` otherwise.
`fun isEmpty(): Boolean`

#### iterator
Creates an iterator over the elements of the array.
`operator fun iterator(): UIntIterator`

#### set
Sets the element at the given index to the given value.
This method can be called using the index operator.
`operator fun set(index: Int, value: UInt)`

### Extension Properties

#### indices
Returns the range of valid indices for the array.
`val UIntArray.indices: IntRange`

#### lastIndex
Returns the last valid index for the array.
`val UIntArray.lastIndex: Int`

### Extension Functions

#### all
Returns `true` if all elements match the given predicate.
`fun UIntArray.all(predicate: (UInt) -> Boolean): Boolean`
`fun <T> Iterable<T>.all(predicate: (T) -> Boolean): Boolean`

#### any
Returns `true` if the array has at least one element.
`fun UIntArray.any(): Boolean`
Returns `true` if at least one element matches the given predicate.
`fun UIntArray.any(predicate: (UInt) -> Boolean): Boolean`
`fun <T> Iterable<T>.any(predicate: (T) -> Boolean): Boolean`

#### asIntArray
Returns an array of type IntArray, which is a view of this array where each element is a signed reinterpretation of the corresponding element of this array.
`fun UIntArray.asIntArray(): IntArray`

#### asIterable
Returns this collection as an Iterable.
`fun <T> Iterable<T>.asIterable(): Iterable<T>`

#### asSequence
Creates a Sequence instance that wraps the original collection, returning its elements when being iterated.
`fun <T> Iterable<T>.asSequence(): Sequence<T>`

#### associate
Returns a Map containing key-value pairs provided by the transform function applied to elements of the given collection.
`fun <T, K, V> Iterable<T>.associate(transform: (T) -> Pair<K, V>): Map<K, V>`

#### associateBy
Returns a Map containing the elements from the given collection indexed by the key returned from the keySelector
function applied to each element.
`fun <T, K> Iterable<T>.associateBy(keySelector: (T) -> K): Map<K, T>`
Returns a Map containing the values provided by valueTransform and indexed by the keySelector function applied to elements of the given collection.
`fun <T, K, V> Iterable<T>.associateBy(keySelector: (T) -> K, valueTransform: (T) -> V): Map<K, V>`

#### associateByTo
Populates and returns the destination mutable map with key-value pairs, where the key is provided by the keySelector function applied to each element of the given collection and the value is the element itself.
`fun <T, K, M : MutableMap<in K, in T>> Iterable<T>.associateByTo(destination: M, keySelector: (T) -> K): M`
Populates and returns the destination mutable map with key-value pairs, where the key is provided by the keySelector function and the value is provided by the valueTransform function applied to elements of the given collection.
`fun <T, K, V, M : MutableMap<in K, in V>> Iterable<T>.associateByTo(destination: M, keySelector: (T) -> K, valueTransform: (T) -> V): M`

#### associateTo
Populates and returns the destination mutable map with key-value pairs provided by the transform function applied to each element of the given collection.
`fun <T, K, V, M : MutableMap<in K, in V>> Iterable<T>.associateTo(destination: M, transform: (T) -> Pair<K, V>): M`

#### associateWith
Returns a Map where keys are elements from the given collection and values are produced by the valueSelector function applied to each element.
`fun <K, V> Iterable<K>.associateWith(valueSelector: (K) -> V): Map<K, V>`

#### associateWithTo
Populates and returns the destination mutable map with key-value pairs for each element of the given collection, where the key is the element itself and the value is provided by the valueSelector function
applied to that key.
`fun <K, V, M : MutableMap<in K, in V>> Iterable<K>.associateWithTo(destination: M, valueSelector: (K) -> V): M`

#### binarySearch (JVM, since 1.3)
Searches the array or the range of the array for the provided element using the binary search algorithm. The array is expected to be sorted, otherwise the result is undefined.
`fun UIntArray.binarySearch(element: UInt, fromIndex: Int = 0, toIndex: Int = size): Int`

#### chunked
Splits this collection into a list of lists, each not exceeding the given size.
`fun <T> Iterable<T>.chunked(size: Int): List<List<T>>`
Splits this collection into several lists, each not exceeding the given size, and applies the given transform function to each.
`fun <T, R> Iterable<T>.chunked(size: Int, transform: (List<T>) -> R): List<R>`

#### component1
Returns the 1st element from the collection.
`operator fun UIntArray.component1(): UInt`

#### component2
Returns the 2nd element from the collection.
`operator fun UIntArray.component2(): UInt`

#### component3
Returns the 3rd element from the collection.
`operator fun UIntArray.component3(): UInt`

#### component4
Returns the 4th element from the collection.
`operator fun UIntArray.component4(): UInt`

#### component5
Returns the 5th element from the collection.
`operator fun UIntArray.component5(): UInt`

#### contains
Returns `true` if the element is found in the collection.
`operator fun <T> Iterable<T>.contains(element: T): Boolean`

#### containsAll
Checks if all elements in the specified collection are contained in this collection.
`fun <T> Collection<T>.containsAll(elements: Collection<T>): Boolean`
#### contentEquals
Returns `true` if the two specified arrays are structurally equal to one another, i.e. contain the same number of the same elements in the same order.
`infix fun UIntArray.contentEquals(other: UIntArray): Boolean`

#### contentHashCode
Returns a hash code based on the contents of this array as if it is a List.
`fun UIntArray.contentHashCode(): Int`

#### contentToString
Returns a string representation of the contents of the specified array as if it is a List.
`fun UIntArray.contentToString(): String`

#### copyInto
Copies this array or its subrange into the destination array and returns that array.
`fun UIntArray.copyInto(destination: UIntArray, destinationOffset: Int = 0, startIndex: Int = 0, endIndex: Int = size): UIntArray`

#### copyOf
Returns a new array which is a copy of the original array.
`fun UIntArray.copyOf(): UIntArray`
Returns a new array which is a copy of the original array, resized to the given newSize.
The copy is either truncated or padded at the end with zero values if necessary.
`fun UIntArray.copyOf(newSize: Int): UIntArray`

#### copyOfRange
Returns a new array which is a copy of the specified range of the original array.
`fun UIntArray.copyOfRange(fromIndex: Int, toIndex: Int): UIntArray`

#### count
Returns the number of elements matching the given predicate.
`fun UIntArray.count(predicate: (UInt) -> Boolean): Int`
`fun <T> Iterable<T>.count(predicate: (T) -> Boolean): Int`

#### distinct
Returns a list containing only distinct elements from the given collection.
`fun <T> Iterable<T>.distinct(): List<T>`

#### distinctBy
Returns a list containing only elements from the given collection having distinct keys returned by the given selector function.
`fun <T, K> Iterable<T>.distinctBy(selector: (T) -> K): List<T>`

#### drop
Returns a list containing all elements except the first n elements.
`fun UIntArray.drop(n: Int): List<UInt>`

#### dropLast
Returns a list containing all elements except the last n elements.
`fun UIntArray.dropLast(n: Int): List<UInt>`

#### dropLastWhile
Returns a list containing all elements except the last elements that satisfy the given predicate.
`fun UIntArray.dropLastWhile(predicate: (UInt) -> Boolean): List<UInt>`

#### dropWhile
Returns a list containing all elements except the first elements that satisfy the given predicate.
`fun UIntArray.dropWhile(predicate: (UInt) -> Boolean): List<UInt>`
`fun <T> Iterable<T>.dropWhile(predicate: (T) -> Boolean): List<T>`

#### elementAtOrElse
Returns an element at the given index, or the result of calling the defaultValue function if the index is
out of bounds of this array.\n\n`fun UIntArray.elementAtOrElse(    index: Int,     defaultValue: (Int) -> UInt): UInt`\n\nReturns an element at the given index or the result of calling the defaultValue function if the index is out of bounds of this collection.\n\n`fun <T> Iterable<T>.elementAtOrElse(    index: Int,     defaultValue: (Int) -> T): T`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### elementAtOrNull\n\nReturns an element at the given index or `null` if the index is out of bounds of this array.\n\n`fun UIntArray.elementAtOrNull(index: Int): UInt?`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### fill\n\nFills this array or its subrange with the specified element value.\n\n`fun UIntArray.fill(    element: UInt,     fromIndex: Int = 0,     toIndex: Int = size)`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filter\n\nReturns a list containing only elements matching the given predicate.\n\n`fun UIntArray.filter(    predicate: (UInt) -> Boolean): List<UInt>`\n`fun <T> Iterable<T>.filter(    predicate: (T) -> Boolean): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterIndexed\n\nReturns a list containing only elements matching the given predicate.\n\n`fun UIntArray.filterIndexed(    predicate: (index: Int, UInt) -> Boolean): List<UInt>`\n`fun <T> Iterable<T>.filterIndexed(    predicate: (index: Int, T) -> Boolean): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterIndexedTo\n\nAppends all elements matching the given predicate to the given destination.\n\n`fun <C : MutableCollection<in UInt>> UIntArray.filterIndexedTo(    destination: C,     predicate: (index: Int, UInt) -> Boolean): C`\n`fun <T, C : MutableCollection<in T>> Iterable<T>.filterIndexedTo(    destination: C,     predicate: (index: Int, T) -> Boolean): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterIsInstance\n\nReturns a list containing all elements that are instances of specified type parameter R.\n\n`fun <R> Iterable<*>.filterIsInstance(): List<R>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterIsInstanceTo\n\nAppends 
all elements that are instances of specified type parameter R to the given destination.\n\n`fun <R, C : MutableCollection<in R>> Iterable<*>.filterIsInstanceTo(    destination: C): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterNot\n\nReturns a list containing all elements not matching the given predicate.\n\n`fun UIntArray.filterNot(    predicate: (UInt) -> Boolean): List<UInt>`\n`fun <T> Iterable<T>.filterNot(    predicate: (T) -> Boolean): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterNotNull\n\nReturns a list containing all elements that are not `null`.\n\n`fun <T : Any> Iterable<T?>.filterNotNull(): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterNotNullTo\n\nAppends all elements that are not `null` to the given destination.\n\n`fun <C : MutableCollection<in T>, T : Any> Iterable<T?>.filterNotNullTo(    destination: C): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterNotTo\n\nAppends all elements not matching the given predicate to the given destination.\n\n`fun <C : MutableCollection<in UInt>> UIntArray.filterNotTo(    destination: C,     predicate: (UInt) -> Boolean): C`\n`fun <T, C : MutableCollection<in T>> Iterable<T>.filterNotTo(    destination: C,     predicate: (T) -> Boolean): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### filterTo\n\nAppends all elements matching the given predicate to the given destination.\n\n`fun <C : MutableCollection<in UInt>> UIntArray.filterTo(    destination: C,     predicate: (UInt) -> Boolean): C`\n`fun <T, C : MutableCollection<in T>> Iterable<T>.filterTo(    destination: C,     predicate: (T) -> Boolean): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### find\n\nReturns the first element matching the given predicate, or `null` if no such element was found.\n\n`fun UIntArray.find(predicate: (UInt) -> Boolean): UInt?`\n`fun <T> Iterable<T>.find(predicate: (T) -> Boolean): T?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### findLast\n\nReturns the last element matching the given predicate, or `null` if no such element was found.\n\n`fun 
UIntArray.findLast(predicate: (UInt) -> Boolean): UInt?`\n`fun <T> Iterable<T>.findLast(predicate: (T) -> Boolean): T?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### first\n\nReturns the first element.\n\n`fun UIntArray.first(): UInt`\n\nReturns the first element matching the given predicate.\n\n`fun UIntArray.first(predicate: (UInt) -> Boolean): UInt`\n`fun <T> Iterable<T>.first(predicate: (T) -> Boolean): T`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### firstOrNull\n\nReturns the first element, or `null` if the array is empty.\n\n`fun UIntArray.firstOrNull(): UInt?`\n\nReturns the first element matching the given predicate, or `null` if no such element was found.\n\n`fun UIntArray.firstOrNull(    predicate: (UInt) -> Boolean): UInt?`\n`fun <T> Iterable<T>.firstOrNull(    predicate: (T) -> Boolean): T?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### flatMap\n\nReturns a single list of all elements yielded from the results of the transform function being invoked on each element of the original array.\n\n`fun <R> UIntArray.flatMap(    transform: (UInt) -> Iterable<R>): List<R>`\n\nReturns a single list of all elements yielded from the results of the transform function being invoked on each element of the original collection.\n\n`fun <T, R> Iterable<T>.flatMap(    transform: (T) -> Iterable<R>): List<R>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### flatMapTo\n\nAppends all elements yielded from the results of the transform function being invoked on each element of the original array, to the given destination.\n\n`fun <R, C : MutableCollection<in R>> UIntArray.flatMapTo(    destination: C,     transform: (UInt) -> Iterable<R>): C`\n\nAppends all elements yielded from the results of the transform function being invoked on each element of the original collection, to the given destination.\n\n`fun <T, R, C : MutableCollection<in R>> Iterable<T>.flatMapTo(    destination: C,     transform: (T) -> Iterable<R>): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### fold\n\nAccumulates value starting with initial value and applying operation from left to right to current 
accumulator value and each element.\n\n`fun <R> UIntArray.fold(    initial: R,     operation: (acc: R, UInt) -> R): R`\n`fun <T, R> Iterable<T>.fold(    initial: R,     operation: (acc: R, T) -> R): R`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### foldIndexed\n\nAccumulates value starting with initial value and applying operation from left to right to current accumulator value and each element with its index in the original array.\n\n`fun <R> UIntArray.foldIndexed(    initial: R,     operation: (index: Int, acc: R, UInt) -> R): R`\n\nAccumulates value starting with initial value and applying operation from left to right to current accumulator value and each element with its index in the original collection.\n\n`fun <T, R> Iterable<T>.foldIndexed(    initial: R,     operation: (index: Int, acc: R, T) -> R): R`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### foldRight\n\nAccumulates value starting with initial value and applying operation from right to left to each element and current accumulator value.\n\n`fun <R> UIntArray.foldRight(    initial: R,     operation: (UInt, acc: R) -> R): R`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### foldRightIndexed\n\nAccumulates value starting with initial value and applying operation from right to left to each element with its index in the original array and current accumulator value.\n\n`fun <R> UIntArray.foldRightIndexed(    initial: R,     operation: (index: Int, UInt, acc: R) -> R): R`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### forEach\n\nPerforms the given action on each element.\n\n`fun UIntArray.forEach(action: (UInt) -> Unit)`\n`fun <T> Iterable<T>.forEach(action: (T) -> Unit)`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### forEachIndexed\n\nPerforms the given action on each element, providing sequential index with the element.\n\n`fun UIntArray.forEachIndexed(    action: (index: Int, UInt) -> Unit)`\n`fun <T> Iterable<T>.forEachIndexed(    action: (index: Int, T) -> Unit)`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### getOrElse\n\nReturns an element at the given 
index or the result of calling the defaultValue function if the index is out of bounds of this array.\n\n`fun UIntArray.getOrElse(    index: Int,     defaultValue: (Int) -> UInt): UInt`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### getOrNull\n\nReturns an element at the given index or `null` if the index is out of bounds of this array.\n\n`fun UIntArray.getOrNull(index: Int): UInt?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### groupBy\n\nGroups elements of the original array by the key returned by the given keySelector function applied to each element and returns a map where each group key is associated with a list of corresponding elements.\n\n`fun <K> UIntArray.groupBy(    keySelector: (UInt) -> K): Map<K, List<UInt>>`\n\nGroups values returned by the valueTransform function applied to each element of the original array by the key returned by the given keySelector function applied to the element and returns a map where each group key is associated with a list of corresponding values.\n\n`fun <K, V> UIntArray.groupBy(    keySelector: (UInt) -> K,     valueTransform: (UInt) -> V): Map<K, List<V>>`\n\nGroups elements of the original collection by the key returned by the given keySelector function applied to each element and returns a map where each group key is associated with a list of corresponding elements.\n\n`fun <T, K> Iterable<T>.groupBy(    keySelector: (T) -> K): Map<K, List<T>>`\n\nGroups values returned by the valueTransform function applied to each element of the original collection by the key returned by the given keySelector function applied to the element and returns a map where each group key is associated with a list of corresponding values.\n\n`fun <T, K, V> Iterable<T>.groupBy(    keySelector: (T) -> K,     valueTransform: (T) -> V): Map<K, List<V>>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### groupByTo\n\nGroups elements of the original array by the key returned by the given keySelector function applied to each element and puts to the destination map each group key 
associated with a list of corresponding elements.\n\n`fun <K, M : MutableMap<in K, MutableList<UInt>>> UIntArray.groupByTo(    destination: M,     keySelector: (UInt) -> K): M`\n\nGroups values returned by the valueTransform function applied to each element of the original array by the key returned by the given keySelector function applied to the element and puts to the destination map each group key associated with a list of corresponding values.\n\n`fun <K, V, M : MutableMap<in K, MutableList<V>>> UIntArray.groupByTo(    destination: M,     keySelector: (UInt) -> K,     valueTransform: (UInt) -> V): M`\n\nGroups elements of the original collection by the key returned by the given keySelector function applied to each element and puts to the destination map each group key associated with a list of corresponding elements.\n\n`fun <T, K, M : MutableMap<in K, MutableList<T>>> Iterable<T>.groupByTo(    destination: M,     keySelector: (T) -> K): M`\n\nGroups values returned by the valueTransform function applied to each element of the original collection by the key returned by the given keySelector function applied to the element and puts to the destination map each group key associated with a list of corresponding values.\n\n`fun <T, K, V, M : MutableMap<in K, MutableList<V>>> Iterable<T>.groupByTo(    destination: M,     keySelector: (T) -> K,     valueTransform: (T) -> V): M`\nCommon\nJVM\nJS\nNative\n1.1\n\n#### groupingBy\n\nCreates a Grouping source from a collection to be used later with one of group-and-fold operations using the specified keySelector function to extract a key from each element.\n\n`fun <T, K> Iterable<T>.groupingBy(    keySelector: (T) -> K): Grouping<T, K>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### ifEmpty\n\nReturns this array if it's not empty or the result of calling defaultValue function if the array is empty.\n\n`fun <C, R> C.ifEmpty(    defaultValue: () -> R): R where C : Array<*>, C : R`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### 
indexOf\n\nReturns first index of element, or -1 if the array does not contain element.\n\n`fun UIntArray.indexOf(element: UInt): Int`\n\nReturns first index of element, or -1 if the collection does not contain element.\n\n`fun <T> Iterable<T>.indexOf(element: T): Int`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### indexOfFirst\n\nReturns index of the first element matching the given predicate, or -1 if the array does not contain such element.\n\n`fun UIntArray.indexOfFirst(predicate: (UInt) -> Boolean): Int`\n\nReturns index of the first element matching the given predicate, or -1 if the collection does not contain such element.\n\n`fun <T> Iterable<T>.indexOfFirst(    predicate: (T) -> Boolean): Int`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### indexOfLast\n\nReturns index of the last element matching the given predicate, or -1 if the array does not contain such element.\n\n`fun UIntArray.indexOfLast(predicate: (UInt) -> Boolean): Int`\n\nReturns index of the last element matching the given predicate, or -1 if the collection does not contain such element.\n\n`fun <T> Iterable<T>.indexOfLast(    predicate: (T) -> Boolean): Int`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### intersect\n\nReturns a set containing all elements that are contained by both this set and the specified collection.\n\n`infix fun <T> Iterable<T>.intersect(    other: Iterable<T>): Set<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### isNotEmpty\n\nReturns `true` if the collection is not empty.\n\n`fun <T> Collection<T>.isNotEmpty(): Boolean`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### isNullOrEmpty\n\nReturns `true` if this nullable collection is either null or empty.\n\n`fun <T> Collection<T>?.isNullOrEmpty(): Boolean`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### joinTo\n\nAppends the string from all the elements separated using separator and using the given prefix and postfix if supplied.\n\n`fun <T, A : Appendable> Iterable<T>.joinTo(    buffer: A,     separator: CharSequence = \", \",     prefix: CharSequence = \"\",     postfix: 
CharSequence = \"\",     limit: Int = -1,     truncated: CharSequence = \"...\",     transform: ((T) -> CharSequence)? = null): A`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### joinToString\n\nCreates a string from all the elements separated using separator and using the given prefix and postfix if supplied.\n\n`fun <T> Iterable<T>.joinToString(    separator: CharSequence = \", \",     prefix: CharSequence = \"\",     postfix: CharSequence = \"\",     limit: Int = -1,     truncated: CharSequence = \"...\",     transform: ((T) -> CharSequence)? = null): String`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### last\n\nReturns the last element.\n\n`fun UIntArray.last(): UInt`\n\nReturns the last element matching the given predicate.\n\n`fun UIntArray.last(predicate: (UInt) -> Boolean): UInt`\n`fun <T> Iterable<T>.last(predicate: (T) -> Boolean): T`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### lastIndexOf\n\nReturns last index of element, or -1 if the array does not contain element.\n\n`fun UIntArray.lastIndexOf(element: UInt): Int`\n\nReturns last index of element, or -1 if the collection does not contain element.\n\n`fun <T> Iterable<T>.lastIndexOf(element: T): Int`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### lastOrNull\n\nReturns the last element, or `null` if the array is empty.\n\n`fun UIntArray.lastOrNull(): UInt?`\n\nReturns the last element matching the given predicate, or `null` if no such element was found.\n\n`fun UIntArray.lastOrNull(predicate: (UInt) -> Boolean): UInt?`\n`fun <T> Iterable<T>.lastOrNull(predicate: (T) -> Boolean): T?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### map\n\nReturns a list containing the results of applying the given transform function to each element in the original array.\n\n`fun <R> UIntArray.map(transform: (UInt) -> R): List<R>`\n\nReturns a list containing the results of applying the given transform function to each element in the original collection.\n\n`fun <T, R> Iterable<T>.map(transform: (T) -> R): List<R>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### 
mapIndexed\n\nReturns a list containing the results of applying the given transform function to each element and its index in the original array.\n\n`fun <R> UIntArray.mapIndexed(    transform: (index: Int, UInt) -> R): List<R>`\n\nReturns a list containing the results of applying the given transform function to each element and its index in the original collection.\n\n`fun <T, R> Iterable<T>.mapIndexed(    transform: (index: Int, T) -> R): List<R>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### mapIndexedNotNull\n\nReturns a list containing only the non-null results of applying the given transform function to each element and its index in the original collection.\n\n`fun <T, R : Any> Iterable<T>.mapIndexedNotNull(    transform: (index: Int, T) -> R?): List<R>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### mapIndexedNotNullTo\n\nApplies the given transform function to each element and its index in the original collection and appends only the non-null results to the given destination.\n\n`fun <T, R : Any, C : MutableCollection<in R>> Iterable<T>.mapIndexedNotNullTo(    destination: C,     transform: (index: Int, T) -> R?): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### mapIndexedTo\n\nApplies the given transform function to each element and its index in the original array and appends the results to the given destination.\n\n`fun <R, C : MutableCollection<in R>> UIntArray.mapIndexedTo(    destination: C,     transform: (index: Int, UInt) -> R): C`\n\nApplies the given transform function to each element and its index in the original collection and appends the results to the given destination.\n\n`fun <T, R, C : MutableCollection<in R>> Iterable<T>.mapIndexedTo(    destination: C,     transform: (index: Int, T) -> R): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### mapNotNull\n\nReturns a list containing only the non-null results of applying the given transform function to each element in the original collection.\n\n`fun <T, R : Any> Iterable<T>.mapNotNull(    transform: (T) -> R?): 
List<R>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### mapNotNullTo\n\nApplies the given transform function to each element in the original collection and appends only the non-null results to the given destination.\n\n`fun <T, R : Any, C : MutableCollection<in R>> Iterable<T>.mapNotNullTo(    destination: C,     transform: (T) -> R?): C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### mapTo\n\nApplies the given transform function to each element of the original array and appends the results to the given destination.\n\n`fun <R, C : MutableCollection<in R>> UIntArray.mapTo(    destination: C,     transform: (UInt) -> R): C`\n\nApplies the given transform function to each element of the original collection and appends the results to the given destination.\n\n`fun <T, R, C : MutableCollection<in R>> Iterable<T>.mapTo(    destination: C,     transform: (T) -> R): C`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### max\n\nReturns the largest element or `null` if there are no elements.\n\n`fun UIntArray.max(): UInt?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### maxBy\n\nReturns the first element yielding the largest value of the given function or `null` if there are no elements.\n\n`fun <R : Comparable<R>> UIntArray.maxBy(    selector: (UInt) -> R): UInt?`\n`fun <T, R : Comparable<R>> Iterable<T>.maxBy(    selector: (T) -> R): T?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### maxWith\n\nReturns the first element having the largest value according to the provided comparator or `null` if there are no elements.\n\n`fun UIntArray.maxWith(comparator: Comparator<in UInt>): UInt?`\n`fun <T> Iterable<T>.maxWith(comparator: Comparator<in T>): T?`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### min\n\nReturns the smallest element or `null` if there are no elements.\n\n`fun UIntArray.min(): UInt?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### minBy\n\nReturns the first element yielding the smallest value of the given function or `null` if there are no elements.\n\n`fun <R : Comparable<R>> UIntArray.minBy(    selector: (UInt) -> R): 
UInt?`\n`fun <T, R : Comparable<R>> Iterable<T>.minBy(    selector: (T) -> R): T?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### minus\n\nReturns a list containing all elements of the original collection without the first occurrence of the given element.\n\n`operator fun <T> Iterable<T>.minus(element: T): List<T>`\n\nReturns a list containing all elements of the original collection except the elements contained in the given elements array.\n\n`operator fun <T> Iterable<T>.minus(    elements: Array<out T>): List<T>`\n\nReturns a list containing all elements of the original collection except the elements contained in the given elements collection.\n\n`operator fun <T> Iterable<T>.minus(    elements: Iterable<T>): List<T>`\n\nReturns a list containing all elements of the original collection except the elements contained in the given elements sequence.\n\n`operator fun <T> Iterable<T>.minus(    elements: Sequence<T>): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### minusElement\n\nReturns a list containing all elements of the original collection without the first occurrence of the given element.\n\n`fun <T> Iterable<T>.minusElement(element: T): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### minWith\n\nReturns the first element having the smallest value according to the provided comparator or `null` if there are no elements.\n\n`fun UIntArray.minWith(comparator: Comparator<in UInt>): UInt?`\n`fun <T> Iterable<T>.minWith(comparator: Comparator<in T>): T?`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### none\n\nReturns `true` if the array has no elements.\n\n`fun UIntArray.none(): Boolean`\n\nReturns `true` if no elements match the given predicate.\n\n`fun UIntArray.none(predicate: (UInt) -> Boolean): Boolean`\n`fun <T> Iterable<T>.none(predicate: (T) -> Boolean): Boolean`\nCommon\nJVM\nJS\nNative\n1.1\n\n#### onEach\n\nPerforms the given action on each element and returns the collection itself afterwards.\n\n`fun <T, C : Iterable<T>> C.onEach(action: (T) -> Unit): 
C`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### orEmpty\n\nReturns this Collection if it's not `null` and the empty list otherwise.\n\n`fun <T> Collection<T>?.orEmpty(): Collection<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### partition\n\nSplits the original collection into a pair of lists, where the first list contains elements for which predicate yielded `true`, while the second list contains elements for which it yielded `false`.\n\n`fun <T> Iterable<T>.partition(    predicate: (T) -> Boolean): Pair<List<T>, List<T>>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### plus\n\nReturns an array containing all elements of the original array and then the given element.\n\n`operator fun UIntArray.plus(element: UInt): UIntArray`\n\nReturns an array containing all elements of the original array and then all elements of the given elements collection.\n\n`operator fun UIntArray.plus(    elements: Collection<UInt>): UIntArray`\n\nReturns an array containing all elements of the original array and then all elements of the given elements array.\n\n`operator fun UIntArray.plus(elements: UIntArray): UIntArray`\n\nReturns a list containing all elements of the original collection and then the given element.\n\n`operator fun <T> Iterable<T>.plus(element: T): List<T>`\n`operator fun <T> Collection<T>.plus(element: T): List<T>`\n\nReturns a list containing all elements of the original collection and then all elements of the given elements array.\n\n`operator fun <T> Iterable<T>.plus(    elements: Array<out T>): List<T>`\n`operator fun <T> Collection<T>.plus(    elements: Array<out T>): List<T>`\n\nReturns a list containing all elements of the original collection and then all elements of the given elements collection.\n\n`operator fun <T> Iterable<T>.plus(    elements: Iterable<T>): List<T>`\n`operator fun <T> Collection<T>.plus(    elements: Iterable<T>): List<T>`\n\nReturns a list containing all elements of the original collection and then all elements of the given elements sequence.\n\n`operator fun <T> 
Iterable<T>.plus(    elements: Sequence<T>): List<T>`\n`operator fun <T> Collection<T>.plus(    elements: Sequence<T>): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### plusElement\n\nReturns a list containing all elements of the original collection and then the given element.\n\n`fun <T> Iterable<T>.plusElement(element: T): List<T>`\n`fun <T> Collection<T>.plusElement(element: T): List<T>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### random\n\nReturns a random element from this array.\n\n`fun UIntArray.random(): UInt`\n\nReturns a random element from this array using the specified source of randomness.\n\n`fun UIntArray.random(random: Random): UInt`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### reduce\n\nAccumulates value starting with the first element and applying operation from left to right to current accumulator value and each element.\n\n`fun UIntArray.reduce(    operation: (acc: UInt, UInt) -> UInt): UInt`\n`fun <S, T : S> Iterable<T>.reduce(    operation: (acc: S, T) -> S): S`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### reduceIndexed\n\nAccumulates value starting with the first element and applying operation from left to right to current accumulator value and each element with its index in the original array.\n\n`fun UIntArray.reduceIndexed(    operation: (index: Int, acc: UInt, UInt) -> UInt): UInt`\n\nAccumulates value starting with the first element and applying operation from left to right to current accumulator value and each element with its index in the original collection.\n\n`fun <S, T : S> Iterable<T>.reduceIndexed(    operation: (index: Int, acc: S, T) -> S): S`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### reduceRight\n\nAccumulates value starting with last element and applying operation from right to left to each element and current accumulator value.\n\n`fun UIntArray.reduceRight(    operation: (UInt, acc: UInt) -> UInt): UInt`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### reduceRightIndexed\n\nAccumulates value starting with last element and applying operation from right to left 
to each element with its index in the original array and current accumulator value.\n\n`fun UIntArray.reduceRightIndexed(    operation: (index: Int, UInt, acc: UInt) -> UInt): UInt`\nNative\n1.3\n\n#### refTo\n\n`fun UIntArray.refTo(index: Int): CValuesRef<UIntVar>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### requireNoNulls\n\nReturns the original collection containing all the non-`null` elements, throwing an IllegalArgumentException if there are any `null` elements.\n\n`fun <T : Any> Iterable<T?>.requireNoNulls(): Iterable<T>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### reverse\n\nReverses elements in the array in-place.\n\n`fun UIntArray.reverse()`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### reversed\n\nReturns a list with elements in reversed order.\n\n`fun UIntArray.reversed(): List<UInt>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### reversedArray\n\nReturns an array with elements of this array in reversed order.\n\n`fun UIntArray.reversedArray(): UIntArray`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### shuffled\n\nReturns a new list with the elements of this list randomly shuffled using the specified random instance as the source of randomness.\n\n`fun <T> Iterable<T>.shuffled(random: Random): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### single\n\nReturns the single element, or throws an exception if the array is empty or has more than one element.\n\n`fun UIntArray.single(): UInt`\n\nReturns the single element matching the given predicate, or throws an exception if there is no matching element or more than one.\n\n`fun UIntArray.single(predicate: (UInt) -> Boolean): UInt`\n`fun <T> Iterable<T>.single(predicate: (T) -> Boolean): T`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### singleOrNull\n\nReturns the single element, or `null` if the array is empty or has more than one element.\n\n`fun UIntArray.singleOrNull(): UInt?`\n\nReturns the single element matching the given predicate, or `null` if no matching element was found or more than one element matched.\n\n`fun UIntArray.singleOrNull(    predicate: (UInt) -> 
Boolean): UInt?`\n`fun <T> Iterable<T>.singleOrNull(    predicate: (T) -> Boolean): T?`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### slice\n\nReturns a list containing elements at indices in the specified indices range.\n\n`fun UIntArray.slice(indices: IntRange): List<UInt>`\n\nReturns a list containing elements at specified indices.\n\n`fun UIntArray.slice(indices: Iterable<Int>): List<UInt>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sliceArray\n\nReturns an array containing elements of this array at specified indices.\n\n`fun UIntArray.sliceArray(indices: Collection<Int>): UIntArray`\n\nReturns an array containing elements at indices in the specified indices range.\n\n`fun UIntArray.sliceArray(indices: IntRange): UIntArray`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sort\n\nSorts the array in-place.\n\n`fun UIntArray.sort()`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sortDescending\n\nSorts elements in the array in-place descending according to their natural sort order.\n\n`fun UIntArray.sortDescending()`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sorted\n\nReturns a list of all elements sorted according to their natural sort order.\n\n`fun UIntArray.sorted(): List<UInt>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sortedArray\n\nReturns an array with all elements of this array sorted according to their natural sort order.\n\n`fun UIntArray.sortedArray(): UIntArray`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sortedArrayDescending\n\nReturns an array with all elements of this array sorted descending according to their natural sort order.\n\n`fun UIntArray.sortedArrayDescending(): UIntArray`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### sortedBy\n\nReturns a list of all elements sorted according to natural sort order of the value returned by specified selector function.\n\n`fun <T, R : Comparable<R>> Iterable<T>.sortedBy(    selector: (T) -> R?): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### sortedByDescending\n\nReturns a list of all elements sorted descending according to natural sort order of the value 
returned by specified selector function.\n\n`fun <T, R : Comparable<R>> Iterable<T>.sortedByDescending(    selector: (T) -> R?): List<T>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sortedDescending\n\nReturns a list of all elements sorted descending according to their natural sort order.\n\n`fun UIntArray.sortedDescending(): List<UInt>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### sortedWith\n\nReturns a list of all elements sorted according to the specified comparator.\n\n`fun <T> Iterable<T>.sortedWith(    comparator: Comparator<in T>): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### subtract\n\nReturns a set containing all elements that are contained by this collection and not contained by the specified collection.\n\n`infix fun <T> Iterable<T>.subtract(    other: Iterable<T>): Set<T>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### sum\n\nReturns the sum of all elements in the array.\n\n`fun UIntArray.sum(): UInt`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### sumBy\n\nReturns the sum of all values produced by selector function applied to each element in the array.\n\n`fun UIntArray.sumBy(selector: (UInt) -> UInt): UInt`\n\nReturns the sum of all values produced by selector function applied to each element in the collection.\n\n`fun <T> Iterable<T>.sumBy(selector: (T) -> Int): Int`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### sumByDouble\n\nReturns the sum of all values produced by selector function applied to each element in the array.\n\n`fun UIntArray.sumByDouble(selector: (UInt) -> Double): Double`\n\nReturns the sum of all values produced by selector function applied to each element in the collection.\n\n`fun <T> Iterable<T>.sumByDouble(    selector: (T) -> Double): Double`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### take\n\nReturns a list containing first n elements.\n\n`fun UIntArray.take(n: Int): List<UInt>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### takeLast\n\nReturns a list containing last n elements.\n\n`fun UIntArray.takeLast(n: Int): List<UInt>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### 
takeLastWhile\n\nReturns a list containing last elements satisfying the given predicate.\n\n`fun UIntArray.takeLastWhile(    predicate: (UInt) -> Boolean): List<UInt>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### takeWhile\n\nReturns a list containing first elements satisfying the given predicate.\n\n`fun UIntArray.takeWhile(    predicate: (UInt) -> Boolean): List<UInt>`\n`fun <T> Iterable<T>.takeWhile(    predicate: (T) -> Boolean): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### toCollection\n\nAppends all elements to the given destination collection.\n\n`fun <T, C : MutableCollection<in T>> Iterable<T>.toCollection(    destination: C): C`\nNative\n1.3\n\n#### toCValues\n\n`fun UIntArray.toCValues(): CValues<UIntVar>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### toHashSet\n\nReturns a HashSet of all elements.\n\n`fun <T> Iterable<T>.toHashSet(): HashSet<T>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### toIntArray\n\nReturns an array of type IntArray, which is a copy of this array where each element is a signed reinterpretation of the corresponding element of this array.\n\n`fun UIntArray.toIntArray(): IntArray`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### toList\n\nReturns a List containing all elements.\n\n`fun <T> Iterable<T>.toList(): List<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### toMutableSet\n\nReturns a mutable set containing all distinct elements from the given collection.\n\n`fun <T> Iterable<T>.toMutableSet(): MutableSet<T>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### toSet\n\nReturns a Set of all elements.\n\n`fun <T> Iterable<T>.toSet(): Set<T>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### toTypedArray\n\nReturns a typed object array containing all of the elements of this primitive array.\n\n`fun UIntArray.toTypedArray(): Array<UInt>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### toUIntArray\n\nReturns an array of UInt containing all of the elements of this collection.\n\n`fun Collection<UInt>.toUIntArray(): UIntArray`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### union\n\nReturns a set containing all distinct 
elements from both collections.\n\n`infix fun <T> Iterable<T>.union(other: Iterable<T>): Set<T>`\nCommon\nJVM\nJS\nNative\n1.2\n\n#### windowed\n\nReturns a list of snapshots of the window of the given size sliding along this collection with the given step, where each snapshot is a list.\n\n`fun <T> Iterable<T>.windowed(    size: Int,     step: Int = 1,     partialWindows: Boolean = false): List<List<T>>`\n\nReturns a list of results of applying the given transform function to an each list representing a view over the window of the given size sliding along this collection with the given step.\n\n`fun <T, R> Iterable<T>.windowed(    size: Int,     step: Int = 1,     partialWindows: Boolean = false,     transform: (List<T>) -> R): List<R>`\nCommon\nJVM\nJS\nNative\n1.3\n\n#### withIndex\n\nReturns a lazy Iterable that wraps each element of the original array into an IndexedValue containing the index of that element and the element itself.\n\n`fun UIntArray.withIndex(): Iterable<IndexedValue<UInt>>`\nCommon\nJVM\nJS\nNative\n1.0\n\n#### zip\n\nReturns a list of pairs built from the elements of `this` array and the other array with the same index. The returned list has length of the shortest collection.\n\n`infix fun <R> UIntArray.zip(    other: Array<out R>): List<Pair<UInt, R>>`\n`infix fun UIntArray.zip(    other: UIntArray): List<Pair<UInt, UInt>>`\n\nReturns a list of values built from the elements of `this` array and the other array with the same index using the provided transform function applied to each pair of elements. The returned list has length of the shortest collection.\n\n`fun <R, V> UIntArray.zip(    other: Array<out R>,     transform: (a: UInt, b: R) -> V): List<V>`\n\nReturns a list of pairs built from the elements of `this` collection and other array with the same index. 
The returned list has length of the shortest collection.\n\n`infix fun <R> UIntArray.zip(    other: Iterable<R>): List<Pair<UInt, R>>`\n\nReturns a list of values built from the elements of `this` array and the other collection with the same index using the provided transform function applied to each pair of elements. The returned list has length of the shortest collection.\n\n`fun <R, V> UIntArray.zip(    other: Iterable<R>,     transform: (a: UInt, b: R) -> V): List<V>`\n\nReturns a list of values built from the elements of `this` array and the other array with the same index using the provided transform function applied to each pair of elements. The returned list has length of the shortest array.\n\n`fun <V> UIntArray.zip(    other: UIntArray,     transform: (a: UInt, b: UInt) -> V): List<V>`\n\nReturns a list of pairs built from the elements of `this` collection and the other array with the same index. The returned list has length of the shortest collection.\n\n`infix fun <T, R> Iterable<T>.zip(    other: Array<out R>): List<Pair<T, R>>`\n\nReturns a list of values built from the elements of `this` collection and the other array with the same index using the provided transform function applied to each pair of elements. The returned list has length of the shortest collection.\n\n`fun <T, R, V> Iterable<T>.zip(    other: Array<out R>,     transform: (a: T, b: R) -> V): List<V>`\n\nReturns a list of pairs built from the elements of `this` collection and other collection with the same index. The returned list has length of the shortest collection.\n\n`infix fun <T, R> Iterable<T>.zip(    other: Iterable<R>): List<Pair<T, R>>`\n\nReturns a list of values built from the elements of `this` collection and the other collection with the same index using the provided transform function applied to each pair of elements. 
The returned list has length of the shortest collection.\n\n`fun <T, R, V> Iterable<T>.zip(    other: Iterable<R>,     transform: (a: T, b: R) -> V): List<V>`\nCommon\nJVM\nJS\nNative\n1.2\n\n#### zipWithNext\n\nReturns a list of pairs of each two adjacent elements in this collection.\n\n`fun <T> Iterable<T>.zipWithNext(): List<Pair<T, T>>`\n\nReturns a list containing the results of applying the given transform function to an each pair of two adjacent elements in this collection.\n\n`fun <T, R> Iterable<T>.zipWithNext(    transform: (a: T, b: T) -> R): List<R>`" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6273408,"math_prob":0.75656414,"size":19501,"snap":"2019-51-2020-05","text_gpt3_token_len":4314,"char_repetition_ratio":0.21480228,"word_repetition_ratio":0.5536936,"special_character_ratio":0.20645095,"punctuation_ratio":0.14037123,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95439947,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T07:47:11Z\",\"WARC-Record-ID\":\"<urn:uuid:d46980af-ff3f-4fcc-be52-00379efce518>\",\"Content-Length\":\"518090\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9eef428-81ab-4da6-9648-6c6438bcb93d>\",\"WARC-Concurrent-To\":\"<urn:uuid:e472473e-dadc-41fb-a8fc-b7596633f0e1>\",\"WARC-IP-Address\":\"99.84.181.113\",\"WARC-Target-URI\":\"https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-u-int-array/index.html\",\"WARC-Payload-Digest\":\"sha1:PEYRC5ZEGKGWUCY7JEQYCDL73M4H6OY6\",\"WARC-Block-Digest\":\"sha1:XC2V6CWKUB4IANBGSNNWDOWDFWW5UFKP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540551267.14_warc_CC-MAIN-20191213071155-20191213095155-00536.warc.gz\"}"}
https://tools.carboncollective.co/compound-interest/12358-at-21-percent-in-15-years/
[ "# What is the compound interest on $12358 at 21% over 15 years?\n\nIf you want to invest $12,358 over 15 years, and you expect it will earn 21.00% in annual interest, your investment will have grown to become $215,639.71.\n\nIf you're on this page, you probably already know what compound interest is and how a sum of money can grow at a faster rate each year, as the interest is added to the original principal amount and recalculated for each period.\n\nThe actual rate that $12,358 compounds at is dependent on the frequency of the compounding periods. In this article, to keep things simple, we are using an annual compounding period of 15 years, but it could be monthly, weekly, daily, or even continuously compounding.\n\nThe formula for calculating compound interest is:\n\n$$A = P(1 + \\dfrac{r}{n})^{nt}$$\n\n• A is the amount of money after the compounding periods\n• P is the principal amount\n• r is the annual interest rate\n• n is the number of compounding periods per year\n• t is the number of years\n\nWe can now input the variables for the formula to confirm that it does work as expected and calculates the correct amount of compound interest.\n\nFor this formula, we need to convert the rate, 21.00%, into a decimal, which would be 0.21.\n\n$$A = 12358(1 + \\dfrac{ 0.21 }{1})^{ 15}$$\n\nAs you can see, we are ignoring the n when calculating this to the power of 15 because our example is for annual compounding, or one period per year, so 15 × 1 = 15.\n\n## How the compound interest on $12,358 grows over time\n\nThe interest from previous periods is added to the principal amount, and this grows the sum at a rate that is always accelerating.
The table below shows how the amount increases over the 15 years it is compounding:\n\n| Year | Start Balance | Interest | End Balance |\n|------|---------------|-----------|-------------|\n| 1 | $12,358.00 | $2,595.18 | $14,953.18 |\n| 2 | $14,953.18 | $3,140.17 | $18,093.35 |\n| 3 | $18,093.35 | $3,799.60 | $21,892.95 |\n| 4 | $21,892.95 | $4,597.52 | $26,490.47 |\n| 5 | $26,490.47 | $5,563.00 | $32,053.47 |\n| 6 | $32,053.47 | $6,731.23 | $38,784.70 |\n| 7 | $38,784.70 | $8,144.79 | $46,929.48 |\n| 8 | $46,929.48 | $9,855.19 | $56,784.68 |\n| 9 | $56,784.68 | $11,924.78 | $68,709.46 |\n| 10 | $68,709.46 | $14,428.99 | $83,138.44 |\n| 11 | $83,138.44 | $17,459.07 | $100,597.52 |\n| 12 | $100,597.52 | $21,125.48 | $121,723.00 |\n| 13 | $121,723.00 | $25,561.83 | $147,284.83 |\n| 14 | $147,284.83 | $30,929.81 | $178,214.64 |\n| 15 | $178,214.64 | $37,425.07 | $215,639.71 |\n\nWe can also display this data on a chart to show you how the compounding increases with each compounding period.\n\nAs you can see if you view the compounding chart for $12,358 at 21.00% over a long enough period of time, the rate at which it grows increases over time as the interest is added to the balance and new interest is calculated from that figure.\n\n## How long would it take to double $12,358 at 21% interest?\n\nAnother commonly asked question about compounding interest would be to calculate how long it would take to double your investment of $12,358 assuming an interest rate of 21.00%.\n\nWe can calculate this very approximately using the Rule of 72. The formula for this is very simple:\n\n$$Years = \\dfrac{72}{Interest\\: Rate}$$\n\nBy dividing 72 by the interest rate given, we can calculate the rough number of years it would take to double the money. Let's add our rate to the formula and calculate this:\n\n$$Years = \\dfrac{72}{ 21 } = 3.43$$\n\nUsing this, we know that any amount we invest at 21.00% would double itself in approximately 3.43 years. So $12,358 would be worth $24,716 in ~3.43 years.
We can also calculate the exact length of time it will take to double an amount at 21.00% using a slightly more complex formula: $$Years = \\dfrac{log(2)}{log(1 + 0.21)} = 3.64\\; years$$ Here, we use the decimal format of the interest rate, and use the logarithm math function to calculate the exact value. As you can see, the exact calculation is very close to the Rule of 72 calculation, which is much easier to remember. Hopefully, this article has helped you to understand the compound interest you might achieve from investing$12,358 at 21.00% over a 15 year investment period." ]
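The article's figures can be reproduced directly from the compound interest formula. A quick sanity check in Python (the variable names are ours, not from the article):

```python
import math

P = 12_358   # principal
r = 0.21     # 21.00% annual interest rate as a decimal
n = 1        # compounded once per year
t = 15       # years

# A = P(1 + r/n)^(nt)
A = P * (1 + r / n) ** (n * t)
print(round(A, 2))  # ≈ 215639.71, matching the table's final balance

# Rule of 72 approximation vs. the exact doubling time
approx_years = 72 / 21
exact_years = math.log(2) / math.log(1 + r)
print(round(approx_years, 2), round(exact_years, 2))  # 3.43 vs 3.64
```

Both doubling-time figures agree with the article: the Rule of 72 gives roughly 3.43 years, while the exact logarithmic formula gives about 3.64 years.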
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92741895,"math_prob":0.99882317,"size":3942,"snap":"2023-40-2023-50","text_gpt3_token_len":1192,"char_repetition_ratio":0.14372778,"word_repetition_ratio":0.015360983,"special_character_ratio":0.38229325,"punctuation_ratio":0.17868675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99990857,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T21:39:03Z\",\"WARC-Record-ID\":\"<urn:uuid:255e7aea-b3a1-4d6e-98a0-da0fb4ab236e>\",\"Content-Length\":\"27017\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1343a182-295d-42cf-b189-04484f263e7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:94463359-a021-4700-b4d0-accf4bfabfae>\",\"WARC-IP-Address\":\"138.197.3.89\",\"WARC-Target-URI\":\"https://tools.carboncollective.co/compound-interest/12358-at-21-percent-in-15-years/\",\"WARC-Payload-Digest\":\"sha1:RQBWGMK3XL7Q4FDSUMCXGXYI4GPJAMZW\",\"WARC-Block-Digest\":\"sha1:LVUNUZEXBAAIEML23HKOS5JBQIQPSPKA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510941.58_warc_CC-MAIN-20231001205332-20231001235332-00320.warc.gz\"}"}
http://tutorace.com/html/hs_exit_exam_adult_dvd.html
[ "", null, "High School Exit Exam Math Prep\n\nThe High School Exit Exam Math Prep is a 8 video series on DVD targeted for students in grades 9, 10 and 11 preparing for their high school exit exam. The series is based on NCTM and state standards and offers multiple hours of video instruction on topics that a student would find on a high school exit exam. A concentration of topics covered is represented in the pie chart below. An accompanying study guide (purchased separately) with blackline masters provides a place for students to document and reinforce lessons and offers additional pencil and paper practice problems. Problem solving, test taking skills and underlying processes are woven throughout the 5 sections of the series.", null, "High School Exit Exam Math Prep DVD Series\n8 DVDs\n\nDVD 1 – Percent and Order of Operations\n\n• Finding Percent Algebraically\n• Percent Increase\n• Percent Decrease\n• Order of Operations\n• Distributive Property\n• Order of Operations with Numbers\n\nDVD 2 - Estimation and Sequences\n\n• Estimation\n• Using Estimation in an Application I\n• Using Estimation in an Application II\n• Operations on Integer Values\n• Application on Proportions\n• Sequences\n• Finding the nth Term of a Geometric Pattern I\n• Finding the nth Term of a Geometric Pattern II\n• Finding the nth Term of a Algebraic Pattern\n• Define Linear Sequence\n• Define Geometric and Exponential Sequences\n• Develop a Formula to Find Missing Values in a Dependent Sequence\n• Use Proportions to Find Missing Values in a Dependent Sequence\n\nDVD 3 - Graphing Linear Equations\n\n• Linear Equations\n• Relationship between a Graph and a Table of Values\n• Relationship between a Graph and an Equation\n• Linear Equation\n• Equation of Line\n• Equation of a Line Given 2 Points on the Line\n• Find the Equation of a Line from a Graph\n• Finding Points of Intersection\n• Find the Point of Intersection Graphically\n• Find the Point of Intersection using Tables of Values\n• 
Systems of Equations\n\nDVD 4 - Solving Equations\n\n• Variables and Exponents\n• Translating Written Expressions using Variables\n• Exponents\n• Solving Equations\n• Variable Substitution in Expressions and Solving Linear Equations\n• Solving Linear Equations with the Distributive Property\n• Solving Linear Equations\n• Solving Linear Equations using a Calculator I\n• Solving Linear Equations using a Calculator II\n\nDVD 5 - Angles, Polygons, and Similar Triangles\n\n• Transversals, Quadrilaterals, and Polygons\n• Transversals and Parallel Lines\n• Polygons\n• Triangles\n• Triangles\n• Similar Triangles\n• Similar Triangle Applications\n• Typical Circumstance using Similar Triangles\n\nDVD 6 - The Pythagorean Theorem, and Rectangle Problems\n\n• Pythagorean Theorem\n• Pythagorean Theorem I\n• Pythagorean Theorem II\n• Using the Pythagorean Theorem to find the Distance Between Two Points\n• Pythagorean Theorem Application I\n• Pythagorean Theorem Application II\n• Rectangles\n• Rectangles\n• Rectangles and Proportions I\n• Rectangles and Proportions II\n\nDVD 7 - Measurement\n\n• Circles\n• Terminology of Circles\n• Circumference Application\n• Area of a Circle\n• Angles of a Triangle\n• Perimeter and Area\n• Three-Dimensional Shapes\n• Prisms and Volume\n• Right Circular Cylinder\n• Right Pyramids and Cones\n• Using Algebra to Solve Measurement Applications\n• Rate of Growth\n• Distance\n• Time, Rate, and Cost I\n• Time, Rate, and Cost II\n\nDVD 8 - Data Analysis and Probability\n\n• Pie Charts, Bar Graphs, and Line Graphs\n• Pie Charts\n• Reading Circle Graphs\n• Histograms and Bar Graphs\n• Line Graphs\n• Reading Data and Charts\n• Survey Data\n• Reading a Chart\n• Probability\n• Concept of Probability\n• Independent & Dependent Events\n• Tree Diagrams\n• Geometric Probability\n\nTo Order, Contact:\n\nVideo Resources Software\n1-888-223-6284\n\n11767 South Dixie Highway\nMiami, FL 33156\[email protected]" ]
[ null, "http://tutorace.com/assets/images/acemath_logo.jpg", null, "http://tutorace.com/assets/images/piechart.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7596088,"math_prob":0.8361982,"size":3307,"snap":"2019-13-2019-22","text_gpt3_token_len":789,"char_repetition_ratio":0.13563427,"word_repetition_ratio":0.0970696,"special_character_ratio":0.20713638,"punctuation_ratio":0.036072146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99908423,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T10:31:30Z\",\"WARC-Record-ID\":\"<urn:uuid:2a9c746d-9b61-4011-b9f4-c99198827a4b>\",\"Content-Length\":\"67021\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c621e87-8181-4363-bfcb-fafcf6e46dc7>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9afd112-77e0-4234-af5a-81eaec636ee3>\",\"WARC-IP-Address\":\"64.71.33.185\",\"WARC-Target-URI\":\"http://tutorace.com/html/hs_exit_exam_adult_dvd.html\",\"WARC-Payload-Digest\":\"sha1:FCKHICFN27Q6X6IG6OWTIQLBFQCROS3T\",\"WARC-Block-Digest\":\"sha1:YUB4CJK56IBEXZQ5HFA7EBDLZY6LLEOC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201953.19_warc_CC-MAIN-20190319093341-20190319115341-00345.warc.gz\"}"}
https://studyres.com/doc/8066977/csce590-822-data-mining-principles-and-applications
[ "Transcript\n```CSCE822 Data Mining and Warehousing\nLecture 12: Support Vector Machines\nDr. Jianjun Hu\nmleg.cse.sc.edu/edu/csce822\nUniversity of South Carolina, Department of Computer Science and Engineering\n\nOverview\n- Intro. to Support Vector Machines (SVM)\n- Properties of SVM\n- Applications: Gene Expression Data Classification, Text Categorization if time permits\n- Discussion\n\nLinear Classifiers\nA linear classifier has the form f(x,w,b) = sign(w x + b): w x + b > 0 denotes the +1 class, w x + b < 0 denotes the -1 class.\nHow would you classify this data? Any of several separating lines would be fine.. ..but which is best? A poor choice can leave points misclassified to the +1 class.\n\nClassifier Margin\nDefine the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.\n\nMaximum Margin\nSupport Vectors are those datapoints that the margin pushes up against.\n1. Maximizing the margin is good according to intuition and PAC theory.\n2. Implies that only support vectors are important; other training examples are ignorable.\n3. Empirically it works very very well.\nThe maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM): the Linear SVM.\n\nLinear SVM Mathematically\nM = margin width. What we know:\n- w . x+ + b = +1\n- w . x- + b = -1\n- w . (x+ - x-) = 2\nTherefore M = (x+ - x-) . w / ||w|| = 2 / ||w||\n\nGoal:\n1) Correctly classify all training data:\nw xi + b ≥ +1 if yi = +1\nw xi + b ≤ -1 if yi = -1\ni.e. yi (w xi + b) ≥ 1 for all i\n2) Maximize the margin M = 2/||w||, which is the same as minimizing (1/2) w^T w.\n\nWe can formulate a Quadratic Optimization Problem and solve for w and b:\nMinimize Φ(w) = ½ w^T w subject to yi (w xi + b) ≥ 1 for all i.\n\nSolving the Optimization Problem\nFind w and b such that Φ(w) = ½ w^T w is minimized, and for all {(xi, yi)}: yi (w^T xi + b) ≥ 1.\nNeed to optimize a quadratic function subject to linear constraints.\nQuadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them.\nThe solution involves constructing a dual problem where a Lagrange multiplier αi is associated with every constraint in the primary problem:\nFind α1…αN such that Q(α) = Σαi - ½ ΣΣ αi αj yi yj xi^T xj is maximized and\n(1) Σ αi yi = 0\n(2) αi ≥ 0 for all αi\n\nThe Optimization Problem Solution\nThe solution has the form:\nw = Σ αi yi xi\nb = yk - w^T xk for any xk such that αk > 0\nEach non-zero αi indicates that the corresponding xi is a support vector.\nThen the classifying function will have the form:\nf(x) = Σ αi yi xi^T x + b\nNotice that it relies on an inner product between the test point x and the support vectors xi – we will return to this later.\nAlso keep in mind that solving the optimization problem involved computing the inner products xi^T xj between all pairs of training points.\n\nDataset with noise\nHard Margin: So far we require all data points be classified correctly\n- No training
error. What if the training set is noisy?\n- Solution 1: use very powerful kernels → OVERFITTING!\n\nSoft Margin Classification\nSlack variables ξi can be added to allow misclassification of difficult or noisy examples. What should our quadratic optimization criterion be?\nMinimize ½ w.w + C Σ_{k=1}^{R} ξk\n\nHard Margin vs. Soft Margin\nThe old formulation:\nFind w and b such that Φ(w) = ½ w^T w is minimized and for all {(xi, yi)}: yi (w^T xi + b) ≥ 1\nThe new formulation incorporating slack variables:\nFind w and b such that Φ(w) = ½ w^T w + C Σξi is minimized and for all {(xi, yi)}: yi (w^T xi + b) ≥ 1 - ξi and ξi ≥ 0 for all i\nParameter C can be viewed as a way to control overfitting.\n\nLinear SVMs: Overview\n- The classifier is a separating hyperplane.\n- Most “important” training points are support vectors; they define the hyperplane.\n- Quadratic optimization algorithms can identify which training points xi are support vectors with non-zero Lagrangian multipliers αi.\n- Both in the dual formulation of the problem and in the solution, training points appear only inside dot products:\nFind α1…αN such that Q(α) = Σαi - ½ ΣΣ αi αj yi yj xi^T xj is maximized and\n(1) Σ αi yi = 0\n(2) 0 ≤ αi ≤ C for all αi\nf(x) = Σ αi yi xi^T x + b\n\nNon-linear SVMs\nDatasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about… mapping data to a higher-dimensional space:\n\nNon-linear SVMs: Feature spaces\nGeneral idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:\nΦ: x → φ(x)\n\nThe “Kernel Trick”\nThe linear classifier relies on the dot product between vectors, K(xi,xj) = xi^T xj.\nIf every data point is mapped into high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes:\nK(xi,xj) = φ(xi)^T φ(xj)\nA kernel function is some function that corresponds to an inner product in some
expanded feature space.\nExample: 2-dimensional vectors x = [x1 x2]; let K(xi,xj) = (1 + xi^T xj)^2.\nNeed to show that K(xi,xj) = φ(xi)^T φ(xj):\nK(xi,xj) = (1 + xi^T xj)^2\n= 1 + xi1^2 xj1^2 + 2 xi1 xj1 xi2 xj2 + xi2^2 xj2^2 + 2 xi1 xj1 + 2 xi2 xj2\n= [1 xi1^2 √2 xi1 xi2 xi2^2 √2 xi1 √2 xi2]^T [1 xj1^2 √2 xj1 xj2 xj2^2 √2 xj1 √2 xj2]\n= φ(xi)^T φ(xj), where φ(x) = [1 x1^2 √2 x1 x2 x2^2 √2 x1 √2 x2]\n\nWhat Functions are Kernels?\nFor some functions K(xi,xj) checking that K(xi,xj) = φ(xi)^T φ(xj) can be cumbersome.\nMercer’s theorem: every semi-positive definite symmetric function is a kernel.\nSemi-positive definite symmetric functions correspond to a semi-positive definite symmetric Gram matrix:\nK = [ K(x1,x1) K(x1,x2) … K(x1,xN) ; K(x2,x1) K(x2,x2) … K(x2,xN) ; … ; K(xN,x1) K(xN,x2) … K(xN,xN) ]\n\nExamples of Kernel Functions\n- Linear: K(xi,xj) = xi^T xj\n- Polynomial of power p: K(xi,xj) = (1 + xi^T xj)^p\n- Gaussian: K(xi,xj) = exp(-||xi - xj||^2 / (2σ^2))\n- Sigmoid: K(xi,xj) = tanh(β0 xi^T xj + β1)\n\nNon-linear SVMs Mathematically\nDual problem formulation:\nFind α1…αN such that Q(α) = Σαi - ½ ΣΣ αi αj yi yj K(xi, xj) is maximized and\n(1) Σ αi yi = 0\n(2) αi ≥ 0 for all αi\nThe solution is:\nf(x) = Σ αi yi K(xi, x) + b\nOptimization techniques for finding the αi’s remain the same!\n\nNonlinear SVM - Overview\n- SVM locates a separating hyperplane in the feature space and classifies points in that space.\n- It does not need to represent the space explicitly, simply by defining a kernel function.\n- The kernel function plays the role of the dot product in the feature space.\n\nProperties of SVM\n- Flexibility in choosing a similarity function\n- Sparseness of solution when dealing with large data sets: only support vectors are used to specify the separating hyperplane\n- Ability to handle large feature spaces: complexity does not depend on the dimensionality of the feature space\n- Overfitting can be controlled by the soft margin approach\n- Nice math property: a simple
convex optimization problem which is guaranteed to converge to a single global solution\n- Feature Selection\n\nSVM Applications\nSVM has been used successfully in many real-world problems:\n- text (and hypertext) categorization\n- image classification\n- bioinformatics (protein classification, cancer classification)\n- hand-written character recognition\n\nApplication 1: Cancer Classification\nThe data (a genes-by-patients expression matrix, g-1 … g-p by p-1 … p-n) is:\n- High dimensional: p > 1000, n < 100\n- Imbalanced: less positive samples\n- Full of many irrelevant features → FEATURE SELECTION. In the linear case, wi^2 gives the ranking of dimension i.\n- Noisy: SVM is sensitive to noisy (mis-labeled) data\n\nWeakness of SVM\n- It is sensitive to noise: a relatively small number of mislabeled examples can dramatically decrease the performance.\n- It only considers two classes: how to do multi-class classification with SVM?\n1) With output arity m, learn m SVMs:\nSVM 1 learns “Output==1” vs “Output != 1”\nSVM 2 learns “Output==2” vs “Output != 2”\n:\nSVM m learns “Output==m” vs “Output != m”\n2) To predict the output for a new input, just predict with each SVM and find out which one puts the prediction the furthest into the positive region.\n\nApplication 2: Text Categorization\nTask: the classification of natural text (or hypertext) documents into a fixed number of predefined categories based on their content.\n- email filtering, web searching, sorting documents by topic, etc.\nA document can be assigned to more than one category, so this can be viewed as a series of binary classification problems, one for each category.\n\nRepresentation of Text\nIR’s vector space model (aka bag-of-words representation):\n- A doc is represented by a vector indexed by a pre-fixed set or dictionary of terms\n- Values of an entry can be binary or weights\n- Normalization, stop words, word stems\nDoc x => φ(x)\n\nText Categorization using SVM\nThe distance
between two documents is φ(x)·φ(z).\nK(x,z) = ⟨φ(x), φ(z)⟩ is a valid kernel, so SVM can be used with K(x,z) for discrimination.\nWhy SVM?\n- High dimensional input space\n- Few irrelevant features (dense concept)\n- Sparse document vectors (sparse instances)\n- Text categorization problems are linearly separable\n\nSome Issues\nChoice of kernel:\n- Gaussian or polynomial kernel is default\n- if ineffective, more elaborate kernels are needed\n- domain experts can give assistance in formulating appropriate similarity measures\nChoice of kernel parameters:\n- e.g. σ in the Gaussian kernel\n- σ is the distance between closest points with different classifications\n- In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters.\nOptimization criterion – hard margin vs. soft margin:\n- a lengthy series of experiments in which various parameters are tested\n\nAn excellent tutorial on VC-dimension and Support Vector Machines:\nC.J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):955-974, 1998.\nThe VC/SRM/SVM Bible: Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience; 1998.\nhttp://www.kernel-machines.org/\n\nReferences\n- Support Vector Machine Classification of Microarray Gene Expression Data, Michael P. S. Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Sugnet, Manuel Ares, Jr., David Haussler\n- www.cs.utexas.edu/users/mooney/cs391L/svm.ppt\n- Text categorization with Support Vector Machines: learning with many relevant features, T. Joachims, ECML-98\n```\nRelated documents" ]
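The polynomial-kernel identity worked out on the slides — that K(xi,xj) = (1 + xiᵀxj)² equals φ(xi)ᵀφ(xj) for the 6-dimensional feature map φ — is easy to verify numerically. A minimal sketch in plain Python (the test vectors are our own, chosen only for illustration):

```python
import math

def K(x, z):
    """Polynomial kernel of power 2, computed entirely in the 2-d input space."""
    dot_xz = x[0] * z[0] + x[1] * z[1]
    return (1 + dot_xz) ** 2

def phi(x):
    """Explicit feature map from the slides:
    phi(x) = [1, x1^2, sqrt(2) x1 x2, x2^2, sqrt(2) x1, sqrt(2) x2]."""
    x1, x2 = x
    s = math.sqrt(2)
    return [1.0, x1 * x1, s * x1 * x2, x2 * x2, s * x1, s * x2]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

xi, xj = (1.5, -2.0), (0.5, 3.0)
lhs = K(xi, xj)              # kernel evaluated in the 2-d input space
rhs = dot(phi(xi), phi(xj))  # inner product in the 6-d feature space
print(lhs, rhs)              # the two values agree (up to floating-point error)
assert abs(lhs - rhs) < 1e-9
```

This is the point of the kernel trick: the left-hand side never constructs the 6-dimensional vectors, yet it computes exactly the same inner product.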
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8036543,"math_prob":0.9610608,"size":10788,"snap":"2019-51-2020-05","text_gpt3_token_len":2847,"char_repetition_ratio":0.11377967,"word_repetition_ratio":0.102649,"special_character_ratio":0.25055617,"punctuation_ratio":0.097665556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.994518,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T01:41:52Z\",\"WARC-Record-ID\":\"<urn:uuid:c0e9cace-841e-4f13-8587-fe8e821adef3>\",\"Content-Length\":\"101605\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f4f8d712-7d2a-4c1b-ad4f-41a48c2e3fcd>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f19ca0b-4cd5-4114-b974-f270c0e77f78>\",\"WARC-IP-Address\":\"104.28.21.25\",\"WARC-Target-URI\":\"https://studyres.com/doc/8066977/csce590-822-data-mining-principles-and-applications\",\"WARC-Payload-Digest\":\"sha1:GIU2X5HACHPDMXLFV3CSNOOBB2QP4PKP\",\"WARC-Block-Digest\":\"sha1:6C4744A6OJ3FR7SAKZZYB5TH4LZQPMOK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251684146.65_warc_CC-MAIN-20200126013015-20200126043015-00163.warc.gz\"}"}
https://findanexpert.unimelb.edu.au/scholarlywork/1122403-two-dimensional-anisotropic-spiral-self-avoiding-walks
[ "Journal article\n\n# Two-dimensional anisotropic spiral self-avoiding walks\n\nAJ Guttmann, KJ Wallace\n\nJournal of Physics A: Mathematical and General | Published : 1986\n\n#### Abstract\n\nTwo models of anisotropic spiral self-avoiding random walks recently proposed by Manna (1984) have been investigated by the method of exact series expansions. The number of such n-step walks, $c_n$, appears to behave like $c_n \\sim \\text{const} \\cdot \\mu^n n^{\\beta} \\exp(\\alpha \\sqrt{n})$, where both $\\mu$, which is known exactly, and the constant factor are model dependent, but $\\alpha \\approx 0.14$ and $\\beta \\approx 0.9$ appear to be model independent. The mean square end-to-end distance exponent $\\nu = 0.855 \\pm 0.02$ for both models." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89851916,"math_prob":0.77344376,"size":703,"snap":"2021-31-2021-39","text_gpt3_token_len":161,"char_repetition_ratio":0.09298999,"word_repetition_ratio":0.0,"special_character_ratio":0.21906117,"punctuation_ratio":0.10852713,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96778333,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T09:06:08Z\",\"WARC-Record-ID\":\"<urn:uuid:dbb40cef-cabf-41e1-8302-3bed6c5212ea>\",\"Content-Length\":\"34882\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a583e30a-c488-43a3-befd-7f5d2082f4eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:8400f205-5847-4fbe-92a6-d9a0f74adb10>\",\"WARC-IP-Address\":\"65.8.42.77\",\"WARC-Target-URI\":\"https://findanexpert.unimelb.edu.au/scholarlywork/1122403-two-dimensional-anisotropic-spiral-self-avoiding-walks\",\"WARC-Payload-Digest\":\"sha1:PH5A43LNR6WPOJTA6UYLWCNSOBGCN6FH\",\"WARC-Block-Digest\":\"sha1:IGKE3BDKDFCNCBZLQELW5BOFTRWS2MGN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153223.30_warc_CC-MAIN-20210727072531-20210727102531-00414.warc.gz\"}"}
https://rdrr.io/cran/aqp/man/addVolumeFraction.html
[ "# addVolumeFraction: Symbolize Volume Fraction on a Soil Profile Collection Plot In aqp: Algorithms for Quantitative Pedology\n\n## Symbolize Volume Fraction on a Soil Profile Collection Plot\n\n### Description\n\nSymbolize volume fraction on an existing soil profile collection plot.\n\n### Usage\n\n```addVolumeFraction(\nx,\ncolname,\nres = 10,\ncex.min = 0.1,\ncex.max = 0.5,\npch = 1,\ncol = \"black\"\n)\n```\n\n### Arguments\n\n `x` a `SoilProfileCollection` object `colname` character vector of length 1, naming the column containing volume fraction data (horizon-level attribute) `res` integer, resolution of the grid used to symbolize volume fraction `cex.min` minimum symbol size `cex.max` maximum symbol size `pch` integer, plotting character code `col` symbol color, either a single color or as many colors as there are horizons in `x`\n\n### Details\n\nThis function can only be called after plotting a `SoilProfileCollection` object. Details associated with a call to `plotSPC` are automatically accounted for within this function: e.g. `plot.order`, `width`, etc..\n\n### Author(s)\n\nD.E. Beaudette\n\n`plotSPC`" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.692706,"math_prob":0.5816276,"size":975,"snap":"2023-14-2023-23","text_gpt3_token_len":252,"char_repetition_ratio":0.110195674,"word_repetition_ratio":0.0,"special_character_ratio":0.23076923,"punctuation_ratio":0.18232045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9534154,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T22:21:00Z\",\"WARC-Record-ID\":\"<urn:uuid:80d68141-3292-4207-8d26-802b340ea9e2>\",\"Content-Length\":\"31668\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12d4307e-4f72-4e75-a2f1-6a8b6887342f>\",\"WARC-Concurrent-To\":\"<urn:uuid:25e1af70-937a-4366-8914-c01bd826a37a>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/aqp/man/addVolumeFraction.html\",\"WARC-Payload-Digest\":\"sha1:SSYHK32PTJQ436NGS44Y26QKN76PRFOI\",\"WARC-Block-Digest\":\"sha1:K7EOHQM6HPAB6BZKGVY5EHLFC5NB6VFO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653183.5_warc_CC-MAIN-20230606214755-20230607004755-00310.warc.gz\"}"}
https://www.deeconometrist.nl/econometrics/is-ols-a-thing-of-the-past/
[ "Select Page\nIs OLS a thing of the past?", null, "#### February 23, 2021\n\nOne of the most popular regression methods an econometrician learns is the Ordinary Least Squares (OLS). It is a simple and elegant way of estimating parameters in linear regression. However, there is another technique to perform linear regression using concepts from machine learning. This concept is called gradient boosting and is also related to decision trees.\n\n### Decision trees\n\nA decision tree uses a tree-like model of decisions and possible consequences. It is a very common data mining algorithm used for operations research, specifically in decision analysis. The idea behind it can easily be understood with a small example. Suppose we want to decide whether to take the bus or walk to a certain destination depending on certain parameters. These parameters could include: what the weather is like, how much time it takes to get there or whether you are hungry. Then a simple decision tree can be made as is shown in the figure below.", null, "Example of a decision tree\n\nDecision trees are also used for machine learning principles. This is called decision tree learning and it is a method commonly used in data mining. The goal is to create a model that predicts the value of a target variable based on several input variables. In our example, the target variable is whether to take the bus or walk and the input variables are the weather, the amount of time and whether we are hungry or not. The yellowish squares at the end of the decision tree are called leaves. These decision trees will be used when we will talk about gradient boosting.\n\nMoreover, decision trees can already actually be of really good use. This is due to the fact that these trees are simple to construct and understand. However, there are some limitations to the usage of these trees. 
For example, decision trees can be non-robust, meaning that a small change in the training data set can result in a significant change in the final prediction or in the tree itself. A solution to this is to use multiple decision trees, which is an ensemble method. If you want to learn more about ensemble methods, I mention them in one of my previous articles about machine learning in financial markets. The ensemble method of using multiple decision trees is used in many boosting methods, such as the gradient boosting explained in the following section.

### Gradient boosting

Gradient boosting for regression was developed in 1999 by Jerome H. Friedman. Like other boosting methods, it can be interpreted as an optimization algorithm on a suitable cost or loss function. The word gradient is used here because we will use the derivative of the loss function for optimization. So you may wonder why I was talking about decision trees before. This is simply because gradient boosting is very commonly used with decision trees, which will become clearer when explaining the algorithm.

It is easiest to explain this in the least-squares regression setting, where we have a model F(x) that we want to "teach" to predict the values y by minimizing the mean squared error (MSE; the general formula is MSE = (1/n) Σ_{i=1}^n (y_i - ŷ_i)², where ŷ_i is the predicted value and y_i is the observed value). Here x is simply the vector of explanatory variables in our model. Now we want to "boost" this procedure with an algorithm that consists of M stages. At each stage m (1 ≤ m ≤ M) suppose we have an imperfect model F_m which we want to improve.
This can be done by adding a new estimator to the model as such:

(1)   F_{m+1}(x) = F_m(x) + h_m(x)

Or equivalently,

(2)   h_m(x) = y - F_m(x)

So gradient boosting will actually fit h_m(x) to the residual of the regression at each stage, where each F_{m+1} attempts to correct the errors of its predecessor F_m. As mentioned earlier, the algorithm in the least-squares setting tries to minimize the MSE at each stage of the learning process, which means that the loss function in our context is actually the MSE itself. There are other loss functions that could be used, but for the sake of simplicity we will stick to this function. So we define our loss function to be:

(3)   L(y_i, F(x_i)) = (1/2) (y_i - F(x_i))²

The factor 1/2 is used to make computations simpler. With this we have the input of the algorithm, assuming we have a training data set {(x_i, y_i)}, i = 1, …, n. Moving on to the first step of the algorithm, we must initialize the model with a constant value:

(4)   F_0(x) = argmin_γ Σ_{i=1}^n L(y_i, γ)

where γ is simply the predicted value of our base model. The first step simply means that we need to find a γ that minimizes the loss function. Plugging in our loss function we get:

(5)   F_0(x) = argmin_γ Σ_{i=1}^n (1/2)(y_i - γ)² = (1/n) Σ_{i=1}^n y_i

That means that our initial predicted value is simply the sample average.

Now we move on to step 2, where the decision trees are introduced. We first calculate the so-called pseudo-residuals with the general formula:

(6)   r_{im} = - [∂L(y_i, F(x_i)) / ∂F(x_i)] evaluated at F(x) = F_{m-1}(x)

This looks like a nasty formula, but it is actually something we already calculated. We take the derivative of the loss function with respect to the predicted value and fill in the predicted value of the previous iteration, F_{m-1}(x_i), so that r_{im} = y_i - F_{m-1}(x_i). Note that this is only the case for a loss function that is the MSE. Furthermore, we have that F_{m-1}(x) = F_0(x) for m = 1, which is the initial predicted value. Thus, r_{i1} is simply equal to y_i - F_0(x_i), as we had a minus sign in front of the derivative in the equation for r_{im}.
Now that we have found the residuals, we must fit a regression tree to the r_{im} values and create terminal regions R_{jm}, for j = 1, …, J_m, where J_m is the number of leaves the tree has. This is illustrated in the figure below. We see that for the small data set, the residuals for each observation have been calculated and that a regression tree is constructed based on these residuals. The terminal regions R_{jm} are simply labels that we put on the leaves to keep track of them during each stage of the algorithm.

Note that in real practice decision trees usually have more leaves, and sometimes even multiple decision trees are used, but this simply illustrates how the process works. In the next part we calculate the output value for each leaf. This is done by doing something similar as in step 1:

(7)   γ_{jm} = argmin_γ Σ_{x_i ∈ R_{jm}} L(y_i, F_{m-1}(x_i) + γ)

The difference from step 1 is that we now include the output of the previous step, whereas in step 1 there was no previous step. Furthermore, we do not take the summation over all samples, only over the samples that belong to a specific leaf of the regression tree. Note that, given our choice of loss function, the output values will always be the average of the residuals that end up in the same leaf; for the two leaves in the figure above, γ_{1m} and γ_{2m} are simply the averages of the residuals falling in each leaf. At last, we make a new prediction for each sample using:

(8)   F_m(x) = F_{m-1}(x) + ν Σ_{j=1}^{J_m} γ_{jm} · 1(x ∈ R_{jm})

where ν is the so-called learning rate and ranges from 0 to 1. Usually, this learning rate is set to a low number to increase the accuracy of the model in the long run. According to Leslie N. Smith from the US Naval Research Laboratory, you could estimate a good learning rate by training the model initially with a very low learning rate and increasing it either linearly or exponentially at each iteration, but this is beyond the scope of this article.

All of this is iterated until stage M of the algorithm.
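Steps (4) through (8) above can be sketched in a few lines of Python. This is a minimal illustration, not code from the article: the weak learners here are depth-1 regression "stumps" (a single split) rather than full trees, and the data set, stage count and learning rate are made up for the example.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (one split) to the residuals r."""
    best = None
    for s in np.unique(x)[:-1]:                      # candidate split points
        left, right = r[x <= s], r[x > s]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, lv, rv = best
    return lambda q, s=s, lv=lv, rv=rv: np.where(q <= s, lv, rv)

def gradient_boost(x, y, n_stages=50, nu=0.1):
    pred = np.full_like(y, y.mean())                 # step 1: F_0 = sample average
    for _ in range(n_stages):
        r = y - pred                                 # step 2: pseudo-residuals (MSE loss)
        h = fit_stump(x, r)                          # fit a tree to the residuals
        pred = pred + nu * h(x)                      # update with learning rate nu
    return pred

# made-up toy data: y roughly quadratic in x, with some noise
x = np.arange(10, dtype=float)
y = x**2 + np.array([1.0, -1.0] * 5)
pred = gradient_boost(x, y)
print(np.mean((y - pred)**2), np.var(y))             # boosted MSE is far below Var(y)
```

Each stage fits a stump to the current residuals and adds a shrunken copy of it to the ensemble, so the mean squared error shrinks stage by stage, exactly as the algorithm above describes.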
At the last stage, we should get a model F_M(x) that has a very low mean squared error and gives an accurate prediction for each observation. This is a very detailed description of how gradient boosting works.

### XGBoost

eXtreme Gradient Boosting takes gradient boosting even further. It became well known in machine learning competition circles after its usage in the winning solution of the Higgs Machine Learning Challenge. What makes XGBoost different is that it takes into account a regularization term to avoid overfitting of the model. In the top right of the figure below we see an example of overfitting a model; the best fit is shown in the lower right corner.

*The effect of the regularization term visualized*

So in principle, XGBoost is simply gradient boosting, but the loss function looks a bit different, as it takes into account the overfitting that ordinary gradient boosting is known to be prone to. It contains a regularization term involving the output values of the decision trees, which may look as follows:

(9)   obj = Σ_{i=1}^n L(y_i, ŷ_i) + (1/2) λ Σ_{j=1}^T O_j²

where ŷ_i is the prediction of observation i and O_j can be seen as the score on the j-th leaf. Furthermore, T is the number of leaves in the decision tree, which the user can set. The last term is the regularization part of our new objective. Now we want to minimize this objective function with respect to the scores of each leaf. I will not show how this is calculated, as it is a complicated proof (the interested reader can find the derivation in the XGBoost documentation), but from this proof we get that the optimal value is:

(10)   O_j = - (Σ_{i ∈ leaf j} g_i) / (Σ_{i ∈ leaf j} h_i + λ)

where g_i is the first derivative of the loss function with respect to ŷ_i and h_i is the second derivative of the loss function with respect to ŷ_i. Note that λ is a scaling factor that can be set by the user who wants to run the algorithm.
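Under the squared-error loss, this optimal leaf score works out to the sum of the residuals in the leaf divided by (their count + λ). A tiny numeric sketch, with made-up residuals, shows the effect of λ:

```python
# XGBoost-style optimal leaf score for squared-error loss:
# score = sum(residuals in leaf) / (number of residuals in leaf + lam)
def leaf_output(residuals, lam):
    return sum(residuals) / (len(residuals) + lam)

residuals = [2.0, 3.0, 7.0]               # made-up residuals landing in one leaf

print(leaf_output(residuals, lam=0.0))    # 4.0 -- plain gradient boosting: the leaf average
print(leaf_output(residuals, lam=1.0))    # 3.0 -- regularized: shrunk towards zero
```

With λ = 0 the score is just the leaf average from ordinary gradient boosting; any λ > 0 shrinks it towards zero.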
The larger the value of λ, the more emphasis we put on the regularization penalty, which pushes the optimal output value closer to 0. This is exactly what regularization is supposed to do.

Notice that in the formula for O_j, g_i is simply the negative residual for observation i, as our loss function is the MSE. One can show this by taking the derivative of L(y_i, ŷ_i) = (1/2)(y_i - ŷ_i)² with respect to ŷ_i, which is equal to -(y_i - ŷ_i). This expression is simply the negative residual for the i-th observation. Therefore, for h_i we take the derivative of g_i with respect to ŷ_i, which equals 1. Hence, we can rewrite O_j as follows:

(11)   O_j = (Σ_{i ∈ leaf j} (y_i - ŷ_i)) / (n_j + λ)

where n_j is the number of residuals in leaf j. This is often called the similarity score, and XGBoost uses it to calculate the optimal output value for each leaf in the decision trees. Notice that this is almost exactly the same formula as for the output value γ_{jm} used in ordinary gradient boosting; the only difference is the regularization parameter λ in the denominator.

### OLS vs XGBoost

OLS has proved itself to be very useful in finding linear relationships in your data. However, with XGBoost this can be done with much more precision, especially when your data set is complex. Nonetheless, the real downside of XGBoost is that it is very prone to overfitting the data. We saw that this can usually be addressed by penalizing the regularization term more, but the issue can remain. In addition, these kinds of boosting models suffer from high estimation variance compared to linear regression with OLS. This could be because boosting has a rather complex parameter space, whereas OLS has a very simple one.

In addition, OLS is usually more intuitive, which makes it a lot more user friendly. Furthermore, OLS gives us an idea of how we should model our data and is very interpretable.
So, XGBoost will generally fit training data much better than linear regression, but it is also prone to overfitting and is less easily interpreted. However, boosting algorithms are far superior to OLS when it comes to high-dimensional data, as these algorithms tend to pick out the most relevant variables. Thus, either one may end up being better, depending on your data and your needs. Therefore, you should always check whether the more complicated model is actually needed in your case.

### References

Chen, T., & Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4), 367–378. https://doi.org/10.1016/s0167-9473(01)00065-2

Smith, L. N. (2017). Cyclical Learning Rates for Training Neural Networks. 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, pp. 464–472. doi: 10.1109/WACV.2017.58
[ null, "https://www.deeconometrist.nl/wp-content/uploads/2021/02/Uitgelichte-picca-e1614008302598-1-scaled.jpg", null, "http://deeconometrist.nl/wp-content/uploads/2021/02/example-of-a-decision-tree.png", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-13906107fbc408bd1f653d47fe37c885_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-8908ce41753c28eb8d8e9ce3ecce2a7a_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-62e7de5e9eba7161915ef7826e1d1892_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-8d9eb921d8e276e49f655b365e7ed470_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-0a9d41e21a54364caa3506501c063bdc_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-1228dfee18acad08cf32bb534e2487c1_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-96f7a4bd8fd7e867c476abcf9a9bf6a1_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-bc8ab2caba573ef7a700d98c7447e273_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-b120321aae50b285b0745cb0e3668601_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-2f2701000f0f4178b4310f6c57bdcb17_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-4022ca7cff3cb0762a406f380b97d590_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-85cae4a6b2cd75277bb04f0b330d17c3_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-64d64de6ce455f1eec8bd32b08b6e4b7_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-ee1b841f8f8a4bde6673e3166a57ab42_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-2f2701000f0f4178b4310f6c57bdcb17_l3.svg", null, 
"https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-3640def2f41e35dd2212f621d4b4ce56_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-3ae2a7cbf067ffd1c0bff3475adce40c_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-751c26e2288cd7da9871f7e7065dc4b2_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-d1c51f7c2d82cc30ece1d7de67b43bda_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-c980bd168d2f0f81c79dc1221237f425_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-c980bd168d2f0f81c79dc1221237f425_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-0651af9eed723fdb729941a1982e2dde_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-f8acff1b5641cf6030cb5a9ed2a2dbb1_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-085e5583a18ba45cc72bf81a01a4f2f9_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-b5e35e36831f15516ade82d5eb46d027_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-c96166b15dc32eb9b9159f0c2a8c1e8c_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-ea281a170b60c7fa71d87faee9c25ecd_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-eaac3dae787865ffba74144dec542615_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-ea281a170b60c7fa71d87faee9c25ecd_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-ea281a170b60c7fa71d87faee9c25ecd_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-f299b31afbb2914324ca88d293941237_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-e4ecbf5b57f76cf033dd7cbfd32b5afa_l3.svg", null, 
"https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-ead042b49a19dc8cabeabafe594f4ce8_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-f299b31afbb2914324ca88d293941237_l3.svg", null, "http://deeconometrist.nl/wp-content/uploads/2021/02/small-dataset.jpg", null, "http://deeconometrist.nl/wp-content/uploads/2021/02/Regression-tree-on-residuals.jpg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-45915f0cf8aada9bca2f799e8d80a327_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-67643663f881842ae60dc5c6cdbcaa43_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-be838f4da4cf3a9b3ba6314f3b9b423e_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-02e0a3d2787e5c0e34c437fd39dcf05b_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-889cc463659ac788060c525b4d10655e_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-96f7a4bd8fd7e867c476abcf9a9bf6a1_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-7e7939963df76e36ec86b8a65ad23a89_l3.svg", null, "http://deeconometrist.nl/wp-content/uploads/2021/02/overfiiting.jpg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-b946395f6a504d4988bceac4c984871c_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-a54cb0d4cd397959702ae0a6802d8228_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-99f73c086a37111d4bd8f7274491b383_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-08a6395fe52e60121c766edb0bf99771_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-e14c9a53ce972688bf0ddd953861c89f_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-0b44089ea664d0a15f7ed44a97bef696_l3.svg", null, 
"https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-b31f8d4fb24365f4d9418b386ae8de0b_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-295723b4d98f2e2bb5ce3a7097927bd6_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-a54cb0d4cd397959702ae0a6802d8228_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-67d1e1c9440ae80e913095f38feea7fd_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-a54cb0d4cd397959702ae0a6802d8228_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-63ef9598faa378a35168b53b33a3103b_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-63ef9598faa378a35168b53b33a3103b_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-1a1db4521585b350e2d38b0414d75250_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-295723b4d98f2e2bb5ce3a7097927bd6_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-99f73c086a37111d4bd8f7274491b383_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-60bbd97b60809e30db64bc30dce649f6_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-a54cb0d4cd397959702ae0a6802d8228_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-9821f370332d25843ad5ec8bb517544c_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-99f73c086a37111d4bd8f7274491b383_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-67d1e1c9440ae80e913095f38feea7fd_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-9821f370332d25843ad5ec8bb517544c_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-a54cb0d4cd397959702ae0a6802d8228_l3.svg", null, 
"https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-21f8ae731ef791f82f5eb48ebe88cee9_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-1a1db4521585b350e2d38b0414d75250_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-8ffb50359b5ab2d85f2680d2d583aaea_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-cc24f27d3b621ed360220b723063f3b5_l3.svg", null, "https://www.deeconometrist.nl/wp-content/ql-cache/quicklatex.com-63ef9598faa378a35168b53b33a3103b_l3.svg", null, "http://deeconometrist.nl/wp-content/uploads/2019/10/De-Econometrist-Sam.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.944402,"math_prob":0.97141135,"size":11565,"snap":"2023-40-2023-50","text_gpt3_token_len":2399,"char_repetition_ratio":0.13657989,"word_repetition_ratio":0.019412642,"special_character_ratio":0.20683095,"punctuation_ratio":0.098478064,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9950002,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150],"im_url_duplicate_count":[null,2,null,3,null,1,null,1,null,1,null,1,null,1,null,null,null,2,null,null,null,1,null,2,null,1,null,1,null,2,null,1,null,2,null,1,null,2,null,1,null,1,null,5,null,5,null,1,null,1,null,1,null,1,null,3,null,3,null,1,null,3,null,3,null,2,null,1,null,1,null,2,null,4,null,4,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null,3,null,1,null,5,null,null,null,1,null,null,null,3,null,1,null,2,null,5,null,5,null,5,null,4,null,4,null,2,null,2,null,null,null,1,null,5,null,2,null,null,null,5,null,2,null,5,null,1,null,2,null,1,null,1,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T10:46:50Z\",\"WARC-Record-ID\":\"<urn:uuid:bb2a26f6-5039-4fde-a4ae-c9bad3191ab7>\",\"Content-Length\":\"318989\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c02217d7-6f86-4a87-859d-c8968581e04d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2edc453d-4528-45cd-a86f-10856f3296b6>\",\"WARC-IP-Address\":\"141.138.168.127\",\"WARC-Target-URI\":\"https://www.deeconometrist.nl/econometrics/is-ols-a-thing-of-the-past/\",\"WARC-Payload-Digest\":\"sha1:OAZ2NC3WFXDTEKUKS72BJPUTSX5OS2E3\",\"WARC-Block-Digest\":\"sha1:IALMHCQ2M3UUV7YUI4FM42PLU7WLD4GZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511075.63_warc_CC-MAIN-20231003092549-20231003122549-00439.warc.gz\"}"}
https://www.rhumbarlv.com/how-many-ml-are-in-a-grain/
[ "## How many mL are in a grain?\n\nGrains to milliliters table\n\nGrains Milliliters\n1 gr 0.064800414722654 ml\n2 gr 0.12960082944531 ml\n3 gr 0.19440124416796 ml\n4 gr 0.25920165889062 ml\n\n## What is 4 mL to milligrams?\n\nHow Many Milligrams are in a Milliliter?\n\nVolume in Milliliters: Weight in Milligrams of:\nWater Granulated Sugar\n3 ml 3,000 mg 2,100 mg\n4 ml 4,000 mg 2,800 mg\n5 ml 5,000 mg 3,500 mg\n\n## How many mg is equal to 1 mL?\n\n1,000 milligrams\nSo, a milligram is a thousandth of a thousandth of a kilogram, and a milliliter is a thousandth of a liter. Notice there is an extra thousandth on the weight unit. Therefore, there must be 1,000 milligrams in a milliliter, making the formula for mg to ml conversion: mL = mg / 1000 .\n\n## What is 1 grain equal to in MG?\n\nGrains to Milligrams table\n\nGrains Milligrams\n1 gr 64.80 mg\n2 gr 129.60 mg\n3 gr 194.40 mg\n4 gr 259.20 mg\n\n## Is grain bigger than MG?\n\nA grain is an obsolescent unit of measurement of mass, and in the troy weight, avoirdupois, and Apothecaries’ system, equal to exactly 64.79891 milligrams. It is nominally based upon the mass of a single ideal seed of a cereal.\n\n## How much is a measure of grain?\n\nGrain, unit of weight equal to 0.065 gram, or 1/7,000 pound avoirdupois. One of the earliest units of common measure and the smallest, it is a uniform unit in the avoirdupois, apothecaries’, and troy systems.\n\n## How much is 1 mL in a syringe?\n\nThese are just different names for the same amount of volume. In other words, one milliliter (1 ml) is equal to one cubic centimeter (1 cc). This is a three-tenths milliliter syringe. It may be called a “0.3 ml” syringe or “0.3 cc” syringe.\n\n## Is 1ml the same as 1mg?\n\nOne milliliter (British spelling: millilitre) (ml) is 1/1000 of a liter and is a unit of volume. 1 milligram (mg) is 1/1000 of a gram and is a unit of mass/weight. 
This means that we require an extra piece of information (the density of the substance) in order to be able to convert between the two measurements.

## How much is a grain in medical terms?

(1) An obsolete, non-SI (International System) unit of weight formerly used by pharmacists, equal to 0.0648 g. (2) A nonspecific term for any granule particle (e.g., a psammoma body), seen by light microscopy; the term is no longer used in pathology.

## How many milligrams are in 1/4 grain?

A quarter grain equals 16.2 milligrams, because 1/4 times 64.8 (the conversion factor) = 16.2.

## How many milligrams are in 0.25 grain?

0.25 grain is the same as about 16.1997 milligrams. To convert a value in grains to the corresponding value in milligrams, just multiply the quantity in grains by 64.79891 (the conversion factor).

## How many milligrams are equal to one ml?

Using the 1 ml = 1,000 mg rule (for water): 20 mg = 0.02 ml; 500 mg = 0.5 ml; 100 mg = 0.1 ml; 10 mg = 0.01 ml; 1 mg = 0.001 ml; and 1 ml = 1,000 mg.

## How to convert a grain to a milliliter?

Easily convert grains (gr) to milliliters (mL) using a simple online unit conversion calculator.
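The conversions above are plain multiplications; here is a small Python sketch (the function names are our own, and the milligram-to-milliliter step assumes a water-like density of 1 g/mL, as the tables above do):

```python
GRAIN_MG = 64.79891            # 1 grain = 64.79891 mg, exactly, by definition

def grains_to_mg(grains):
    return grains * GRAIN_MG

def mg_to_ml(mg, density_g_per_ml=1.0):
    # converting a mass to a volume needs a density; water is 1 g/mL
    return mg / 1000.0 / density_g_per_ml

print(grains_to_mg(0.25))               # a quarter grain, about 16.2 mg
print(mg_to_ml(grains_to_mg(1)))        # 1 grain of water, about 0.0648 mL
```

For any other substance, pass its density to `mg_to_ml`; the mass conversions themselves are density-independent.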
{"ft_lang_label":"__label__en","ft_lang_prob":0.8303698,"math_prob":0.9713761,"size":3481,"snap":"2021-43-2021-49","text_gpt3_token_len":1039,"char_repetition_ratio":0.1622088,"word_repetition_ratio":0.0581761,"special_character_ratio":0.31169203,"punctuation_ratio":0.14048532,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9814265,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T17:48:24Z\",\"WARC-Record-ID\":\"<urn:uuid:7efcce6f-78c0-4f14-99bb-142f39d95e69>\",\"Content-Length\":\"55361\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1384e399-84a8-4335-adb4-c866de330f6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7764a23c-9f86-4bc4-8b25-bc8c8f815efd>\",\"WARC-IP-Address\":\"172.67.220.23\",\"WARC-Target-URI\":\"https://www.rhumbarlv.com/how-many-ml-are-in-a-grain/\",\"WARC-Payload-Digest\":\"sha1:MQMFXZQYJC6232IABTRKBYDQZ6RBSKZV\",\"WARC-Block-Digest\":\"sha1:SXYMBPYQ62YCBAUD5GBHBCN2OK66QLUE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359065.88_warc_CC-MAIN-20211130171559-20211130201559-00414.warc.gz\"}"}
https://engcourses-uofa.ca/books/numericalanalysis/curve-fitting/linearization-of-nonlinear-relationships/
[ "#", null, "## Curve Fitting: Linearization of Nonlinear Relationships\n\n### Linearization of Nonlinear Relationships\n\nIn the previous two sections, the model function", null, "was formed as a linear combination of functions", null, "and the minimization of the sum of the squares of the differences between the model prediction and the data produced a linear system of equations to solve for the coefficients in the model. In that case", null, "was linear in the coefficients. In certain situations, it is possible to convert nonlinear relationships to a linear form similar to the previous methods. For example, consider the following models", null, ",", null, ", and", null, ":", null, "", null, "is an exponential model,", null, "is a power model, while", null, "is a logarithmic model. These models are nonlinear in", null, "and the unknown coefficients. However, by taking the natural logarithm of the first two, they can easily be transformed into linear models as follows:", null, "In the first model, the data can be converted to", null, "and linear regression can be used to find the coefficients", null, "and", null, ". For the second model, the data can be converted to", null, "and linear regression can be used to find the coefficients", null, ", and", null, ". The third model can be considered linear after converting the data into the form", null, ".\n\n#### Coefficient of Determination for Nonlinear Relationships\n\nFor nonlinear relationships, the coefficient of determination is not a very good measure for how well the data fit the model. See for example this article on the subject. In fact, different software will give different values for", null, ". We will use the coefficient of determination for nonlinear relationships defined as:", null, "which is equal to 1 minus the ratio between the model sum of squares and the total sum of squares of the data. 
This is consistent with the definition of R² used in Mathematica for nonlinear models.

#### Example 1

Fit an exponential model to the data: (1,1.93), (1.1,1.61), (1.2,2.27), (1.3,3.19), (1.4,3.19), (1.5,3.71), (1.6,4.29), (1.7,4.95), (1.8,6.07), (1.9,7.48), (2,8.72), (2.1,9.34), (2.2,11.62).

##### Solution

The exponential model has the form:

y = b e^{ax}

This form can be linearized as follows:

ln y = ln b + a x

The data needs to be converted to (x_i, ln y_i). Y will be used to designate ln y. The following Microsoft Excel table shows the raw data and the data after conversion to (x_i, Y_i).

The linear regression described above will be used to find the best fit for the model:

Y = a_0 + a_1 x

with

a_1 = (n Σ x_i Y_i - Σ x_i Σ Y_i) / (n Σ x_i² - (Σ x_i)²),    a_0 = Ȳ - a_1 x̄

The following Microsoft Excel table is used to calculate the various entries in the above equation. Therefore:

a_1 ≈ 1.598,    a_0 ≈ -1.071

These can be used to calculate the coefficients in the original model:

a = a_1 ≈ 1.598,    b = e^{a_0} ≈ 0.343

Therefore, the best exponential model based on the least squares of the linearized version has the form:

y ≈ 0.343 e^{1.598 x}

The following Microsoft Excel chart shows the calculated trendline in Excel with the same coefficients. It is possible to calculate the coefficient of determination for the linearized version of this model; however, it would only describe how good the linearized model is.
For the nonlinear model, we will use the coefficient of determination as described above which requires the following Microsoft Excel table:", null, "In this case, the coefficient of determination can be calculated as:", null, "The NonlinearModelFit built-in function in Mathematica can be used to generate the model and calculate its", null, "as shown in the code below.\n\nView Mathematica Code\nData = {{1, 1.93}, {1.1, 1.61}, {1.2, 2.27}, {1.3, 3.19}, {1.4, 3.19}, {1.5, 3.71}, {1.6, 4.29}, {1.7, 4.95}, {1.8, 6.07}, {1.9, 7.48}, {2, 8.72}, {2.1, 9.34}, {2.2, 11.62}};\nmodel = NonlinearModelFit[Data, b1*E^(a1*x), {a1, b1}, x]\ny = Normal[model]\nR2 = model[\"RSquared\"]\nPlot[y, {x, 1, 2.2}, Epilog -> {PointSize[Large], Point[Data]}, PlotLegends -> {\"Model\"}, AxesLabel -> {\"x\", \"y\"}, AxesOrigin -> {0, 0} ]\n\nView Python Code\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import curve_fit\n\nData = [[1, 1.93], [1.1, 1.61], [1.2, 2.27], [1.3, 3.19], [1.4, 3.19], [1.5, 3.71], [1.6, 4.29], [1.7, 4.95], [1.8, 6.07], [1.9, 7.48], [2, 8.72], [2.1, 9.34], [2.2, 11.62]]\ndef f(x, a, b): return a*np.exp(b*x)\ncoeff, covariance = curve_fit(f, [point[0] for point in Data],\n[point[1] for point in Data])\nprint(\"coeff: \", coeff)\nx_val = np.arange(1, 2.2, 0.01)\nplt.title('%.5fe**(%.5fx)' % tuple(coeff))\nplt.plot(x_val, f(x_val, coeff[0], coeff[1]))\nplt.scatter([point[0] for point in Data], [point[1] for point in Data], c='k')\nplt.xlabel(\"x\"); plt.ylabel(\"y\")\nplt.grid(); plt.show()\n\n# R squared\nx = np.array([point[0] for point in Data])\ny = np.array([point[1] for point in Data])\ny_fit = f(x, coeff[0], coeff[1])\nss_res = np.sum((y - y_fit)**2)\nss_tot = np.sum((y - np.mean(y))**2)\nr2 = 1 - (ss_res / ss_tot)\nprint(\"R Squared: \", r2)\n\n\nThe following link provides the MATLAB codes for implementing the linearization of the nonlinear exponential model.\n\nMATLAB file:\n\n#### Example 2\n\nFit a power model to the data: 
(1,1.93),(1.1,1.61),(1.2,2.27),(1.3,3.19),(1.4,3.19),(1.5,3.71),(1.6,4.29),(1.7,4.95),(1.8,6.07),(1.9,7.48),(2,8.72),(2.1,9.34),(2.2,11.62).\n\n##### Solution\n\nThe power model has the form:", null, "This form can be linearized as follows:", null, "The data needs to be converted to", null, ".", null, "and", null, "will be used to designate", null, "and", null, "respectively. The following Microsoft Excel table shows the raw data, and after conversion to", null, ".", null, "The linear regression described above will be used to find the best fit for the model:", null, "with", null, "The following Microsoft Excel table is used to calculate the various entries in the above equation:", null, "Therefore:", null, "These can be used to calculate the coefficients in the original model:", null, "Therefore, the best power model based on the least squares of the linearized version has the form:", null, "The following Microsoft Excel chart shows the calculated trendline in Excel with the same coefficients:", null, "It is possible to calculate the coefficient of determination for the linearized version of this model, however, it would only describe how good the linearized model is. For the nonlinear model, we will use the coefficient of determination as described above which requires the following Microsoft Excel table:", null, "In this case, the coefficient of determination can be calculated as:", null, "The NonlinearModelFit built-in function in Mathematica can be used to generate a slightly better model with a higher", null, ". 
The following is the corresponding Mathematica output.", null, "The Mathematica code is shown below.\n\nView Mathematica Code\nData = {{1, 1.93}, {1.1, 1.61}, {1.2, 2.27}, {1.3, 3.19}, {1.4, 3.19}, {1.5, 3.71}, {1.6, 4.29}, {1.7, 4.95}, {1.8, 6.07}, {1.9, 7.48}, {2, 8.72}, {2.1, 9.34}, {2.2, 11.62}};\nmodel = NonlinearModelFit[Data, b1*x^(a1), {a1, b1}, x]\ny = Normal[model]\nR2 = model[\"RSquared\"]\nPlot[y, {x, 1, 2.2}, Epilog -> {PointSize[Large], Point[Data]}, PlotLegends -> {\"Model\"}, AxesLabel -> {\"x\", \"y\"}, AxesOrigin -> {0, 0} ]\n\nView Python Code\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import curve_fit\n\nData = [[1, 1.93], [1.1, 1.61], [1.2, 2.27], [1.3, 3.19], [1.4, 3.19], [1.5, 3.71], [1.6, 4.29], [1.7, 4.95], [1.8, 6.07], [1.9, 7.48], [2, 8.72], [2.1, 9.34], [2.2, 11.62]]\ndef f(x, a, b): return a*x**b\ncoeff, covariance = curve_fit(f, [point[0] for point in Data],\n[point[1] for point in Data])\nprint(\"coeff: \", coeff)\nx_val = np.arange(1, 2.2, 0.01)\nplt.title('%.5fx**(%.5f)' % tuple(coeff))\nplt.plot(x_val, f(x_val, coeff[0], coeff[1]))\nplt.scatter([point[0] for point in Data], [point[1] for point in Data], c='k')\nplt.xlabel(\"x\"); plt.ylabel(\"y\")\nplt.grid(); plt.show()\n\n# R squared\nx = np.array([point[0] for point in Data])\ny = np.array([point[1] for point in Data])\ny_fit = f(x, coeff[0], coeff[1])\nss_res = np.sum((y - y_fit)**2)\nss_tot = np.sum((y - np.mean(y))**2)\nr2 = 1 - (ss_res / ss_tot)\nprint(\"R Squared: \", r2)\n\n\nThe following link provides the MATLAB codes for implementing the linearization of the nonlinear power model.\n\nMATLAB files:" ]
[ null, "https://engcourses-uofa.ca/wp-content/themes/samer_custom_theme/img/Faculty_Wordmark_Standard.jpg", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-2bbb992535337282f396724234c55353_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-4c514a97130cbc0ecc8a5308b966acc9_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-2bbb992535337282f396724234c55353_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-30dabb4a806593c34011456f6e054d4b_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-60e464263c893f76b82f209f00f8e1ed_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-351aca32a93bea9e4a541b3b058dfc84_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-2f3527ee2a51cce31563f0462db17131_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-30dabb4a806593c34011456f6e054d4b_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-60e464263c893f76b82f209f00f8e1ed_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-351aca32a93bea9e4a541b3b058dfc84_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-822065bc13c102457e5826bb62632b02_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-58b3803138142c7fcfa0d599e3057b91_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-5d9bb8950c55a5de0a10b6ca84fa4c61_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-9f73e7cb82563c203450014581693768_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-b7a60c9061f3f04f9bbc6ee88a863b4f_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-d99f83f5dee708b28653e8f11f2f9b62_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-8d7387e3baab5217cd064dd9a0b48074_l3.png", 
null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-a50b2726af07e19d81ea966554f40006_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-86a446fbb1da20a78d29863e002aea7a_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-7b07321802b8f2905f8d7e42bd3fb7d1_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-3d8f69fa771439ac80e02380328adf23_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-7b07321802b8f2905f8d7e42bd3fb7d1_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-718d87e6ccd45138ba6d16cf988a7a1b_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-71e58a9f5ea70b6d8c24d6605549aaa2_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-5d9bb8950c55a5de0a10b6ca84fa4c61_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-2d64e0ffe952b04b0e6010ee537a9178_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-95685e464eb74d91847aea9266ff9a33_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-e1f3b6e1a0281b63f73571dc63d60c2e_l3.png", null, "https://engcourses-uofa.ca/wp-content/uploads/example-11.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-8c7055d378d07ffaf9898e4917fa5ac5_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-4454844df4d26460ac458e3771463de2_l3.png", null, "https://engcourses-uofa.ca/wp-content/uploads/betternumbers1.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-8becaa122aea6f02438419d4931f700c_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-722e7a0808f528df5375fbaed2220b5f_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-63e7a69beee82918a29a4c93a0f8440b_l3.png", null, "https://engcourses-uofa.ca/wp-content/uploads/example-23.png", null, 
"https://engcourses-uofa.ca/wp-content/uploads/Example25.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-b5a6fc36966fc503236190d94216b2fc_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-7b07321802b8f2905f8d7e42bd3fb7d1_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-7ab756d4b1bf85d8ea0801d1835ee6b9_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-85b9fa1a37831d7d58c006314e711c7e_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-d99f83f5dee708b28653e8f11f2f9b62_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-2d64e0ffe952b04b0e6010ee537a9178_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-876be1f007f61402c4632f0b1eb86011_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-95685e464eb74d91847aea9266ff9a33_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-988a976eced0c732042a5805a6559937_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-42a079abaed36184b9cae8b418b5ef7a_l3.png", null, "https://engcourses-uofa.ca/wp-content/uploads/Example-b1.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-279557fd310d83ce4a432828c2a7f7d3_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-8d1ddaba26cba0cdea7f406f7c98a6ad_l3.png", null, "https://engcourses-uofa.ca/wp-content/uploads/betternumbers2.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-3890aae29708ed93b655b772b816e4b3_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-ff5ca93cf8e6339c76d11f2c694f06ae_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-40dd4fe320b44783a18be1f1d8ef649c_l3.png", null, "https://engcourses-uofa.ca/wp-content/uploads/power1.png", null, 
"https://engcourses-uofa.ca/wp-content/uploads/power2.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-0df28b96159293ab27023e6fd242c6ed_l3.png", null, "https://engcourses-uofa.ca/wp-content/ql-cache/quicklatex.com-7b07321802b8f2905f8d7e42bd3fb7d1_l3.png", null, "https://engcourses-uofa.ca/wp-content/uploads/Power3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.820954,"math_prob":0.9981094,"size":7567,"snap":"2022-27-2022-33","text_gpt3_token_len":2271,"char_repetition_ratio":0.1371149,"word_repetition_ratio":0.6581498,"special_character_ratio":0.34399366,"punctuation_ratio":0.25765577,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999915,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120],"im_url_duplicate_count":[null,null,null,null,null,3,null,null,null,6,null,6,null,6,null,3,null,6,null,6,null,6,null,null,null,3,null,6,null,null,null,3,null,6,null,null,null,3,null,3,null,null,null,3,null,null,null,3,null,3,null,6,null,null,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null,3,null,3,null,6,null,null,null,3,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T11:37:38Z\",\"WARC-Record-ID\":\"<urn:uuid:dbadd42d-2c3c-4177-b516-3ab24c2c3fe5>\",\"Content-Length\":\"91215\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7c6dd2f-7cd3-4a54-b126-a4a27efae3b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c034652-f86a-4a6a-9b8f-dd0847e9fd28>\",\"WARC-IP-Address\":\"159.65.99.127\",\"WARC-Target-URI\":\"https://engcourses-uofa.ca/books/numericalanalysis/curve-fitting/linearization-of-nonlinear-relationships/\",\"WARC-Payload-Digest\":\"sha1:CJN5SO5F5JQIH5WQ55YAPBTPWSQFUMLY\",\"WARC-Block-Digest\":\"sha1:INSV7LPSLCAD54NWDHTN6KK2IFYCM4BK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104375714.75_warc_CC-MAIN-20220704111005-20220704141005-00718.warc.gz\"}"}
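The linearization procedure described in the curve-fitting text above (fit a straight line to the pairs (x, ln y), then recover b = e^intercept) can also be sketched with NumPy alone, without scipy's curve_fit. This is a minimal sketch using the Example 1 data from the text; everything else about it is illustrative.

```python
import numpy as np

# Example 1 data: (x, y) pairs from the text above.
data = [(1.0, 1.93), (1.1, 1.61), (1.2, 2.27), (1.3, 3.19), (1.4, 3.19),
        (1.5, 3.71), (1.6, 4.29), (1.7, 4.95), (1.8, 6.07), (1.9, 7.48),
        (2.0, 8.72), (2.1, 9.34), (2.2, 11.62)]
x = np.array([p[0] for p in data])
y = np.array([p[1] for p in data])

# Linearize y = b*exp(a*x)  ->  ln(y) = ln(b) + a*x,
# then solve the linear least-squares problem on (x, ln y).
a, ln_b = np.polyfit(x, np.log(y), 1)
b = np.exp(ln_b)

# Coefficient of determination of the nonlinear model,
# 1 - SS_res/SS_tot, evaluated on the original (unlinearized) data.
y_fit = b * np.exp(a * x)
r2 = 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(a, b, r2)
```

Note that this minimizes the squared error of ln y, so the coefficients differ slightly from those of NonlinearModelFit, which minimizes the squared error of y itself; this is exactly the distinction the text draws between the linearized fit and the nonlinear fit.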
https://www.i-lighter.com/home.html
[ "We shall have much more to say about propositional logic in Chapter 12. As another example, suppose we are faced with the problem of scheduling final examinations for courses. That is, we must assign course exams to time slots so that two courses may have their exams scheduled in the same time slot only if there is no student taking both. At first, it may not be apparent how we should model this problem. One approach is to draw a circle called a node for each course and draw a line called an edge connecting two nodes if the corresponding courses have a student in common. Given the course-conflict graph, we can solve the exam-scheduling problem by repeatedly finding and removing “maximal independent sets” from the graph.", null, "An independent set is a collection of nodes that have no connecting edges within the collection. An independent set is maximal if no other node from the graph can be added without including an edge between two nodes of the set. In terms of courses, a maximal independent set is any maximal set of courses with no common students. The set of courses corresponding to the selected maximal independent set is assigned to the first time slot.\n\nWe remove from the graph the nodes in the first maximal independent set, along with all incident edges, and then find a maximal independent set among the remaining courses. One choice for the next maximal independent set is the singleton set {CS}." ]
[ null, "http://www.i-lighter.com/wp-content/uploads/2021/01/work-731198_1920.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9404521,"math_prob":0.9394099,"size":1456,"snap":"2023-40-2023-50","text_gpt3_token_len":281,"char_repetition_ratio":0.18044077,"word_repetition_ratio":0.0,"special_character_ratio":0.19024725,"punctuation_ratio":0.06985294,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9776784,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T16:38:56Z\",\"WARC-Record-ID\":\"<urn:uuid:14b70574-ceb0-49ce-895a-15367981b07e>\",\"Content-Length\":\"33087\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:981a5507-4655-4e73-9a35-12d6e5ea2610>\",\"WARC-Concurrent-To\":\"<urn:uuid:a0e5d16a-c79d-4a6b-8358-de8a518e8912>\",\"WARC-IP-Address\":\"172.245.177.243\",\"WARC-Target-URI\":\"https://www.i-lighter.com/home.html\",\"WARC-Payload-Digest\":\"sha1:4GMS2WVHYKPORVONZY3XDYKRKVDOKEYF\",\"WARC-Block-Digest\":\"sha1:FEDAV4BPJOSK2IJAGM7SJNLMJE5I4AN6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233509023.57_warc_CC-MAIN-20230925151539-20230925181539-00018.warc.gz\"}"}
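The repeated-removal procedure described in the exam-scheduling excerpt above can be sketched in Python. The greedy helper and the sample conflict graph below are illustrative assumptions (the excerpt names only the course CS), not taken from the source.

```python
def maximal_independent_set(nodes, edges):
    """Greedily grow an independent set until no remaining node can be added."""
    chosen = []
    for n in nodes:
        if all((n, c) not in edges and (c, n) not in edges for c in chosen):
            chosen.append(n)
    return chosen

def schedule_exams(nodes, edges):
    """Assign courses to time slots by repeatedly finding and removing
    a maximal independent set from the course-conflict graph."""
    slots = []
    remaining = list(nodes)
    while remaining:
        slot = maximal_independent_set(remaining, edges)
        slots.append(slot)
        remaining = [n for n in remaining if n not in slot]
    return slots

# Hypothetical conflict graph: an edge means two courses share a student.
courses = ["Econ", "Eng", "Math", "Phy", "CS"]
conflicts = {("CS", "Econ"), ("CS", "Eng"), ("CS", "Math"),
             ("CS", "Phy"), ("Eng", "Phy"), ("Math", "Phy")}
slots = schedule_exams(courses, conflicts)
print(slots)
```

Each returned slot is an independent set (no two courses in it conflict), and with this graph the heavily conflicted course CS ends up alone in the last slot, mirroring the singleton set {CS} mentioned in the excerpt.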
https://help.imsl.com/fortran/2021.0/html/fnlmath/FNLMath/mch11.13.01.html
[ "FNLMath : Utilities\nUtilities\nRoutines\nScaLAPACK Utilities\nSets up a processor grid ScaLAPACK_SETUP\nCalculates array dimensions for local arrays ScaLAPACK_GETDIM\nWrites the matrix data to a file ScaLAPACK_WRITE\nReads matrix data from an array ScaLAPACK_MAP\nWrites the matrix data to a global array ScaLAPACK_UNMAP\nExits ScaLAPACK usage ScaLAPACK_EXIT\nPrint\nPrints error messages ERROR_POST\nPrints rank-1 or rank-2 arrays of numbers SHOW\nReal rectangular matrix with integer row and column labels WRRRN\nReal rectangular matrix with given format and labels WRRRL\nInteger rectangular matrix with integer row and column labels WRIRN\nInteger rectangular matrix with given format and labels WRIRL\nComplex rectangular matrix with row and column labels WRCRN\nComplex rectangular matrix with given format and labels WRCRL\nSets or retrieves options for printing a matrix WROPT\nSets or retrieves page width and length PGOPT\nPermute\nElements of a vector PERMU\nRows/columns of a matrix PERMA\nSort\nSorts a rank-1 array of real numbers x so the y results are algebraically\nnondecreasing, y1 ≤ y2 ≤ … ≤ yn SORT_REAL\nReal vector by algebraic value SVRGN\nReal vector by algebraic value and permutations returned SVRGP\nInteger vector by algebraic value SVIGN\nInteger vector by algebraic value and permutations returned SVIGP\nReal vector by absolute value SVRBN\nReal vector by absolute value and permutations returned SVRBP\nInteger vector by absolute value SVIBN\nInteger vector by absolute value and permutations returned SVIBP\nSearch\nSorted real vector for a number SRCH\nSorted integer vector for a number ISRCH\nSorted character vector for a string SSRCH\nCharacter String Manipulation\nGets the character corresponding to a given ASCII value ACHAR\nGets the integer ASCII value for a given character IACHAR\nGets upper case integer ASCII value for a character ICASE\nCase-insensitive version comparing two strings IICSR\nCase-insensitive version of intrinsic function IIDEX\nConverts a 
character string with digits to an integer CVTSI\nTime, Date, and Version\nCPU time CPSEC\nTime of day TIMDY\nToday’s date TDATE\nNumber of days from January 1, 1900, to the given date NDAYS\nDate for the number of days from January 1, 1900 NDYIN\nDay of week for given date IDYWK\nVersion and system information VERML\nRandom Number Generation\nGenerates a rank-1 array of random numbers RAND_GEN\nRetrieves the current value of the seed RNGET\nInitializes a random seed RNSET\nSelects the uniform (0,1) generator RNOPT\nInitializes the 32-bit Mersenne Twister generator using an array RNIN32\nRetrieves the current table used in the 32-bit Mersenne Twister generator RNGE32\nSets the current table used in the 32-bit Mersenne Twister generator RNSE32\nInitializes the 64-bit Mersenne Twister generator using an array RNIN64\nRetrieves the current table used in the 64-bit Mersenne Twister generator RNGE64\nSets the current table used in the 64-bit Mersenne Twister generator RNSE64\nGenerates pseudorandom numbers (function form) RNUNF\nGenerates pseudorandom numbers RNUN\nLow Discrepancy Sequences\nShuffled Faure sequence initialization FAURE_INIT\nFrees the structure containing information about\nthe Faure sequence FAURE_FREE\nComputes a shuffled Faure sequence FAURE_NEXT\nOptions Manager\nGets and puts type INTEGER options IUMAG\nGets and puts type REAL options UMAG\nGets and puts type DOUBLE PRECISION options DUMAG\nLine Printer Graphics\nPrints plot of up to 10 sets of points PLOTP\nMiscellaneous\nDecomposes an integer into its prime factors PRIME\nReturns mathematical and physical constants CONST\nConverts a quantity to different units CUNIT\nComputes the square root of a² + b² without underflow or overflow HYPOT\nInitializes or finalizes MPI. MP_SETUP" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63589275,"math_prob":0.93605536,"size":3158,"snap":"2023-40-2023-50","text_gpt3_token_len":761,"char_repetition_ratio":0.12460368,"word_repetition_ratio":0.16322315,"special_character_ratio":0.17796074,"punctuation_ratio":0.013861386,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9826612,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T09:53:43Z\",\"WARC-Record-ID\":\"<urn:uuid:eb28d6d8-2609-466e-a24c-aed1b91c19b9>\",\"Content-Length\":\"20055\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7d64dfd-4319-41d7-8d5c-b2e5499a5715>\",\"WARC-Concurrent-To\":\"<urn:uuid:9261e864-41a4-448b-b852-a4e75ced338d>\",\"WARC-IP-Address\":\"20.98.105.60\",\"WARC-Target-URI\":\"https://help.imsl.com/fortran/2021.0/html/fnlmath/FNLMath/mch11.13.01.html\",\"WARC-Payload-Digest\":\"sha1:DKGUG4UZU4SCJNQTE47W6BOE77FTEF22\",\"WARC-Block-Digest\":\"sha1:LCF6QI5IZ6Z3SFYLOD4QDBHJ3BUUK2ME\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510671.0_warc_CC-MAIN-20230930082033-20230930112033-00671.warc.gz\"}"}
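As an aside on the sort-with-permutation routines listed above (e.g. SVRGP, "Real vector by algebraic value and permutations returned"), the same behavior can be mimicked in Python with NumPy's argsort. This is an illustrative analogy only, not the IMSL Fortran API; note also that Fortran indices are 1-based while the indices below are 0-based.

```python
import numpy as np

def svrgp_like(x):
    """Sort x into algebraically nondecreasing order and also return the
    permutation that was applied, analogous to what IMSL's SVRGP provides."""
    perm = np.argsort(x, kind="stable")  # stable: ties keep original order
    return x[perm], perm

values = np.array([3.0, -1.5, 2.0, -1.5])
sorted_vals, perm = svrgp_like(values)
print(sorted_vals, perm)
```

The returned permutation is useful for carrying companion arrays along with the sort, e.g. reordering a vector of labels with `labels[perm]`.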
https://studysoup.com/tsg/14454/introductory-chemistry-5-edition-chapter-14-problem-74p
[ "Get Full Access to Introductory Chemistry - 5 Edition - Chapter 14 - Problem 74p\n\n# Calculate the pH of each solution. (a) [OH−] = 2.8 × 10−11 M (b) [OH−] = 9.6 × 10−3 M (c)", null, "ISBN: 9780321910295 34\n\n## Solution for problem 74P Chapter 14\n\nIntroductory Chemistry | 5th Edition\n\n• Textbook Solutions\n• 2901 Step-by-step solutions solved by professors and subject experts\n• Get 24/7 help from StudySoup virtual teaching assistants", null, "Introductory Chemistry | 5th Edition\n\nProblem 74P\n\nCalculate the pH of each solution.\n\n(a) [OH−] = 2.8 × 10−11 M\n\n(b) [OH−] = 9.6 × 10−3 M\n\n(c) [OH−] = 3.8 × 10−12 M\n\n(d) [OH−] = 6.4 × 10−4 M\n\nStep-by-Step Solution:\n\nSolution 74P:\n\nStep 1:\n\nHere, we have to calculate the pH of each solution:\n\npOH is the measure of the concentration of hydroxide ion (OH-) in the solution. It is used to measure the alkalinity of a solution.\n\nTo calculate the pOH of a solution we should know the concentration of the hydroxide ion in moles per liter, i.e. the molarity of the solution.\n\npOH is calculated using the expression:\npOH = -log [OH-]\n\nStep 2 of 2:\n\nThe pH then follows from pH = 14 − pOH (at 25 °C, where Kw = 10−14).\n\n##### ISBN: 9780321910295\n\nSince the solution to 74P from chapter 14 was answered, more than 320 students have viewed the full step-by-step answer. This full solution covers the following key subjects: calculate, solution. This expansive textbook survival guide covers 19 chapters, and 2046 solutions. Introductory Chemistry is associated to the ISBN: 9780321910295. This textbook survival guide was created for the textbook: Introductory Chemistry, edition: 5. The answer to “Calculate the pH of each solution. (a) [OH−] = 2.8 × 10−11 M (b) [OH−] = 9.6 × 10−3 M (c) [OH−] = 3.8 × 10−12 M (d) [OH−] = 6.4 × 10−4 M” is broken down into a number of easy to follow steps, and 30 words. The full step-by-step solution to problem 74P from chapter 14 was answered by our top Chemistry solution expert on 05/06/17, 06:45PM." ]
[ null, "https://studysoup.com/cdn/24cover_2610068", null, "https://studysoup.com/cdn/24cover_2610068", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8898187,"math_prob":0.9371944,"size":539,"snap":"2021-21-2021-25","text_gpt3_token_len":185,"char_repetition_ratio":0.17570093,"word_repetition_ratio":0.0,"special_character_ratio":0.35435992,"punctuation_ratio":0.12295082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990553,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-10T22:31:46Z\",\"WARC-Record-ID\":\"<urn:uuid:3cf38de5-7159-4ba3-9f01-7ecb94b87b3c>\",\"Content-Length\":\"79562\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9718f0c2-bf37-4a15-8ccc-2947aa648456>\",\"WARC-Concurrent-To\":\"<urn:uuid:a337d989-d683-411f-a104-d363a8804672>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/14454/introductory-chemistry-5-edition-chapter-14-problem-74p\",\"WARC-Payload-Digest\":\"sha1:J7VGO46R5EZTXSP2VJZ6QJEV5UIQYTUF\",\"WARC-Block-Digest\":\"sha1:MPX55XKBHX2E2H4K3SDSDSGYCP2BPRVT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989749.3_warc_CC-MAIN-20210510204511-20210510234511-00382.warc.gz\"}"}
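The two-step recipe in the chemistry solution above (pOH = −log10[OH−], then pH = 14 − pOH at 25 °C, where Kw = 10−14) can be checked numerically; a minimal sketch using the four concentrations from the problem statement:

```python
import math

def ph_from_oh(oh_molarity):
    """pOH = -log10([OH-]); pH = 14 - pOH (valid at 25 C, where Kw = 1e-14)."""
    poh = -math.log10(oh_molarity)
    return 14.0 - poh

# The four hydroxide concentrations (mol/L) from parts (a)-(d).
for oh in (2.8e-11, 9.6e-3, 3.8e-12, 6.4e-4):
    print(round(ph_from_oh(oh), 2))
```

As expected, the two small [OH−] values give acidic pH (below 7) and the two larger values give basic pH (above 7).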
https://docs.splunk.com/Documentation/Splunk/6.5.5/SearchReference/CommonEvalFunctions
[ "Splunk Enterprise version 6.x is no longer supported as of October 23, 2019. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.", null, "Download topic as PDF\n\n# Evaluation functions\n\n## Commands\n\nYou can use these functions with the `eval`, `fieldformat`, and ` where` commands, and as part of evaluation expressions.\n\n## Usage\n\n• All functions that accept strings can accept literal strings or any field.\n• All functions that accept numbers can accept literal numbers or any numeric field.\n\n### String arguments\n\nFor most evaluation functions, when a string argument is expected, you can specify either an explicit string or a field name. The explicit string is denoted by double quotation marks. In other words, when the function syntax specifies a string you can specify any expression that results in a string. For example, `name + \"server\"`.​\n\n### Nested functions\n\nYou can specify a function as an argument to another function.\n\nIn the following example, the `cidrmatch` function is used as the first argument in the `if` function.\n\n`... | eval isLocal=if(cidrmatch(\"123.132.32.0/25\",ip), \"local\", \"not local\")`\n\nThe following example shows how to use the `true()` function to provide a default to the `case` function.\n\n`... | eval error=case(status == 200, \"OK\", status == 404, \"Not found\", true(), \"Other\")`\n\n## Comparison and Conditional functions\n\nFunction Description Example(s)\n`case(X,\"Y\",...)` This function takes pairs of arguments X and Y. The X arguments are Boolean expressions that will be evaluated from first to last. When the first X expression is encountered that evaluates to TRUE, the corresponding Y argument will be returned. The function defaults to NULL if none are true. This example returns descriptions for the corresponding http status code:\n\n```... 
| eval description=case(error == 404, \"Not found\", error == 500, \"Internal Server Error\", error == 200, \"OK\")```\n\n`cidrmatch(\"X\",Y)` This function returns true, when IP address Y belongs to a particular subnet X. The function uses two string arguments: the first is the CIDR subnet; the second is the IP address to match. This function is compatible with IPv6. This example uses cidrmatch to set a field, `isLocal`, to \"local\" if the field `ip` matches the subnet, or \"not local\" if it does not:\n\n`... | eval isLocal=if(cidrmatch(\"123.132.32.0/25\",ip), \"local\", \"not local\")`\n\nThis example uses cidrmatch as a filter:\n\n`... | where cidrmatch(\"123.132.32.0/25\", ip)`\n\n`coalesce(X,...)` This function takes an arbitrary number of arguments and returns the first value that is not null. Let's say you have a set of events where the IP address is extracted to either `clientip` or `ipaddress`. This example defines a new field called `ip`, that takes the value of either `clientip` or `ipaddress`, depending on which is not NULL (exists in that event):\n\n`... | eval ip=coalesce(clientip,ipaddress)`\n\n`false()` This function enables you to specify a conditional that is obviously false, for example 1==0. You do not specify a field with this function.\n`if(X,Y,Z)` This function takes three arguments. The first argument X must be a Boolean expression. If X evaluates to TRUE, the result is the second argument Y. If, X evaluates to FALSE, the result evaluates to the third argument Z. This example looks at the values of error and returns err=OK if error=200, otherwise returns err=Error:\n\n`... | eval err=if(error == 200, \"OK\", \"Error\")`\n\n`like(TEXT, PATTERN)` This function takes two arguments, a string to match TEXT and a match expression string PATTERN.  It returns TRUE if and only if the first argument is like the SQLite pattern in Y.  
The pattern language supports exact text match, as well as % characters for wildcards and _ characters for a single character match. This example returns is_a_foo=\"yes a foo\" if the field value starts with foo:\n\n`... | eval is_a_foo=if(like(field, \"foo%\"), \"yes a foo\", \"not a foo\")`\n\nor\n\n`... | where like(field, \"foo%\")`\n\n`match(SUBJECT, \"REGEX\")` This function compares the regex string REGEX to the value of SUBJECT and returns a Boolean value. It returns true if the REGEX can find a match against any substring of SUBJECT. This example returns true IF AND ONLY IF field matches the basic pattern of an IP address. Note that the example uses ^ and \\$ to perform a full match.\n\n`... | eval n=if(match(field, \"^\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\$\"), 1, 0)`\n\nThis example uses the `match` function in an <eval-expression>. The SUBJECT is a calculated field called `test`. The \"REGEX\" is the string `yes`.\n\n`| makeresults | eval test=\"yes\" | eval matches = if(match(test,\"yes\"),1,0)`\n\nIf the value is stored with quotation marks, you must use the backslash ( \\ ) character to escape the quotation marks. For example: `| makeresults | eval test=\"\\\"yes\\\"\" | eval matches = if(match(test, \"\\\"yes\\\"\"), 1, 0) `\n\n`null()` This function takes no arguments and returns NULL. The evaluation engine uses NULL to represent \"no value\". Setting a field to NULL clears the field value.\n`nullif(X,Y)` This function is used to compare fields. The function takes two arguments, X and Y, and returns NULL if X = Y. Otherwise it returns X. `... | eval n=nullif(fieldA,fieldB)`\n`searchmatch(X)` This function takes one argument X, which is a search string. The function returns true IF AND ONLY IF the event matches the search string. `... | eval type=if(searchmatch(\"foo bar\"), \"type1\", \"type2\")`\n`true()` This function enables you to specify a conditional that is obviously true, for example 1==1. You do not specify a field with this function. 
This example shows how to use the `true()` function to provide a default to a `case` function.\n\n`... | eval error=case(status == 200, \"OK\", status == 404, \"Not found\", true(), \"Other\")`\n\n`validate(X,Y,...)` This function takes pairs of arguments, Boolean expressions X and strings Y. The function returns the string Y corresponding to the first expression X that evaluates to False and defaults to NULL if all are True. This example runs a simple check for valid ports:\n\n```... | eval n=validate(isint(port), \"ERROR: Port is not an integer\", port >= 1 AND port <= 65535, \"ERROR: Port is out of range\")```\n\n## Conversion functions\n\nFunction Description Examples\n`tonumber(NUMSTR,BASE)`\n\n`tonumber(NUMSTR)`\n\nThis function converts the input string NUMSTR to a number. NUMSTR can be a field name or a value. BASE is optional and used to define the base of the number to convert to. BASE can be 2 to 36, and defaults to 10. If the `tonumber` function cannot parse a field value to a number, for example if the value contains a leading and trailing space, the function returns NULL. Use the `trim` function to remove leading or trailing spaces. If the `tonumber` function cannot parse a literal string to a number, it returns an error. This example returns the string values in the field store_sales:\n\n`... | eval n=tonumber(store_sales)`\n\nThis example returns \"164\":\n\n`... | eval n=tonumber(\"0A4\",16)`\n\nThis example trims any leading or trailing spaces from the values in the `celsius` field before converting it to a number:\n\n`... | eval temperature=tonumber(trim(celsius))`\n\n`tostring(X,Y)` This function converts the input value to a string. If the input value is a number, it reformats it as a string. 
If the input value is a Boolean value, it returns the corresponding string value, \"True\" or \"False\".\n\nThis function requires at least one argument X; if X is a number, the second argument Y is optional and can be `\"hex\"` `\"commas\"` or `\"duration\"`:\n\n`tostring(X,\"hex\")` converts X to hexadecimal.\n\n`tostring(X,\"commas\")` formats X with commas and, if the number includes decimals, rounds to nearest two decimal places.\n\n`tostring(X,\"duration\")` converts seconds X to readable time format HH:MM:SS.\n\nThis example returns \"True 0xF 12,345.68\":\n\n```... | eval n=tostring(1==1) + \" \" + tostring(15, \"hex\") + \" \" + tostring(12345.6789, \"commas\")```\n\nThis example returns `foo=615` and `foo2=00:10:15`:\n\n`... | eval foo=615 | eval foo2 = tostring(foo, \"duration\")`\n\nThis example formats the column totalSales to display values with a currency symbol and commas. You must use a period between the currency value and the `tostring` function.\n\n`...| fieldformat totalSales=\"\\$\".tostring(totalSales,\"commas\")`\n\nNote: When used with the `eval` command, the values might not sort as expected because the values are converted to ASCII. Use the `fieldformat` command with the `tostring` function to format the displayed values. The underlying values are not changed with the `fieldformat` command.\n\n## Cryptographic functions\n\nFunction Description Example(s)\n`md5(X)` This function computes and returns the MD5 hash of a string value X. `... | eval n=md5(field)`\n`sha1(X)` This function computes and returns the secure hash of a string value X based on the FIPS compliant SHA-1 hash function. `... | eval n=sha1(field)`\n`sha256(X)` This function computes and returns the secure hash of a string value X based on the FIPS compliant SHA-256 hash function. `... | eval n=sha256(field)`\n`sha512(X)` This function computes and returns the secure hash of a string value X based on the FIPS compliant SHA-512 hash function. `... 
| eval n=sha512(field)`\n\n## Date and Time functions\n\nIn addition to the functions listed in the following table, there are also variables and modifiers that you can use in searches. See Date and time format variables and Time modifiers in the Search Reference.\n\nFunction Description Example(s)\n`now()` This function takes no arguments and returns the time that the search was started. The time is represented in Unix time or in seconds since Epoch time.\n`relative_time(X,Y)` This function takes an epochtime time, X, as the first argument and a relative time specifier, Y, as the second argument and returns the epochtime value of Y applied to X. `... | eval n=relative_time(now(), \"-1d@d\")`\n`strftime(X,Y)` This function takes an epochtime value, X, as the first argument and renders it as a string using the format specified by Y. For a list and descriptions of format options, refer to the topic \"Common time format variables\". This example returns the hour and minute from the _time field:\n\n`... | eval n=strftime(_time, \"%H:%M\")`\n\n`strptime(X,Y)` This function takes a time represented by a string, X, and parses it into a timestamp using the format specified by Y. For a list and descriptions of format options, refer to the topic \"Common time format variables\". If timeStr is in the form, \"11:59\", this returns it as a timestamp:\n\n`... | eval n=strptime(timeStr, \"%H:%M\")`\n\n`time()` This function returns the wall-clock time with microsecond resolution. The value of time() will be different for each event based on when that event was processed by the `eval` command.\n\n## Informational functions\n\nFunction Description Examples\n`isbool(X)` This function takes one argument X and evaluates whether X is a Boolean data type. The function returns TRUE if X is Boolean. 
Use this function with functions that return Boolean data types, such as `cidrmatch` and `mvfind`.\n\nThis function cannot be used to determine if field values are \"true\" or \"false\" because field values are either string or number data types. Instead, use syntax such as `<fieldname>=true OR <fieldname>=false` to determine field values.\n\n`isint(X)` This function takes one argument X and returns TRUE if X is an integer. `... | eval n=if(isint(field), \"int\", \"not int\")`\n\nor\n\n`... | where isint(field)`\n\n`isnotnull(X)` This function takes one argument X and returns TRUE if X is not NULL. This is a useful check for whether or not a field (X) contains a value. `... | eval n=if(isnotnull(field),\"yes\",\"no\")`\n\nor\n\n`... | where isnotnull(field)`\n\n`isnull(X)` This function takes one argument X and returns TRUE if X is NULL. `... | eval n=if(isnull(field),\"yes\",\"no\")`\n\nor\n\n`... | where isnull(field)`\n\n`isnum(X)` This function takes one argument X and returns TRUE if X is a number. `... | eval n=if(isnum(field),\"yes\",\"no\")`\n\nor\n\n`... | where isnum(field)`\n\n`isstr(X)` This function takes one argument X and returns TRUE if X is a string. `... | eval n=if(isstr(field),\"yes\",\"no\")`\n\nor\n\n`... | where isstr(field)`\n\n`typeof(X)` This function takes one argument and returns a string representation of its type. This example returns \"NumberStringBoolInvalid\":\n\n`... | eval n=typeof(12) + typeof(\"string\") + typeof(1==2) + typeof(badfield)`\n\n## Mathematical functions\n\nFunction Description Examples\n`abs(X)` This function takes a number X and returns its absolute value. This example returns the field absnum, whose values are the absolute values of the numeric field `number`:\n\n`... | eval absnum=abs(number)`\n\n`ceiling(X)` This function, which can also be invoked as `ceil(X)`, rounds a number X up to the next highest integer. This example returns n=2:\n\n`...
| eval n=ceil(1.9)`\n\n`exact(X)` This function renders the result of a numeric eval calculation with a larger amount of precision in the formatted output. `... | eval n=exact(3.14 * num)`\n`exp(X)` This function takes a number X and returns the exponential function `e^X`. The following example returns y=e^3:\n\n`... | eval y=exp(3)`\n\n`floor(X)` This function rounds a number X down to the nearest whole integer. This example returns 1:\n\n`... | eval n=floor(1.9)`\n\n`ln(X)` This function takes a number X and returns its natural log. This example returns the natural log of the values of bytes:\n\n`... | eval lnBytes=ln(bytes)`\n\n`log(X,Y)`\n\n`log(X)`\n\nThis function takes either one or two numeric arguments and returns the log of the first argument X using the second argument Y as the base. If the second argument Y is omitted, this function evaluates the log of number X with base 10. `... | eval num=log(number,2)`\n`pi()` This function takes no arguments and returns the constant pi to 11 digits of precision. `... | eval area_circle=pi()*pow(radius,2)`\n`pow(X,Y)` This function takes two numeric arguments X and Y and returns X^Y. `... | eval area_circle=pi()*pow(radius,2)`\n`round(X,Y)` This function takes one or two numeric arguments X and Y, returning X rounded to the number of decimal places specified by Y. The default is to round to an integer. This example returns n=4:\n\n`... | eval n=round(3.5)`\n\nThis example returns n=2.56:\n\n`... | eval n=round(2.555, 2)`\n\n`sigfig(X)` This function takes one argument X, a number, and rounds that number to the appropriate number of significant figures. `1.00*1111 = 1111`, but\n\n`... | eval n=sigfig(1.00*1111)`\n\nreturns n=1110.\n\n`sqrt(X)` This function takes one numeric argument X and returns its square root. This example returns 3:\n\n`...
| eval n=sqrt(9)`\n\n## Multivalue functions\n\nFunction Description Example(s)\n`commands(X)` This function takes a search string, or field that contains a search string, X and returns a multivalued field containing a list of the commands used in X. (This is generally not recommended for use except for analysis of audit.log events.) `... | eval x=commands(\"search foo | stats count | sort count\")`\n\nreturns a multivalued field X, that contains 'search', 'stats', and 'sort'.\n\n`mvappend(X,...)` This function takes an arbitrary number of arguments and returns a multivalue result of all the values. The arguments can be strings, multivalue fields or single value fields. `... | eval fullName=mvappend(initial_values, \"middle value\", last_values)`\n`mvcount(MVFIELD)` This function takes a field MVFIELD. The function returns the number of values if it is a multivalue, 1 if it is a single value field, and NULL otherwise. `... | eval n=mvcount(multifield)`\n`mvdedup(X)` This function takes a multivalue field X and returns a multivalue field with its duplicate values removed. `... | eval s=mvdedup(mvfield)`\n`mvfilter(X)` This function filters a multivalue field based on an arbitrary Boolean expression X. The Boolean expression X can reference ONLY ONE field at a time.\n\nNote:This function will return NULL values of the field `x` as well. If you don't want the NULL values, use the expression: `mvfilter(x!=NULL)`.\n\nThis example returns all of the values in field email that end in .net or .org:\n\n`... | eval n=mvfilter(match(email, \"\\.net\\$\") OR match(email, \"\\.org\\$\"))`\n\n`mvfind(MVFIELD,\"REGEX\")` This function tries to find a value in multivalue field X that matches the regular expression REGEX. If a match exists, the index of the first matching value is returned (beginning with zero). If no values match, NULL is returned. `... 
| eval n=mvfind(mymvfield, \"err\\d+\")`\n`mvindex(MVFIELD,STARTINDEX, ENDINDEX)`\n\n`mvindex(MVFIELD,STARTINDEX)`\n\nThis function takes two or three arguments, field MVFIELD and numbers STARTINDEX and ENDINDEX, and returns a subset of the multivalue field using the indexes provided.\n\nFor `mvindex(mvfield, startindex, [endindex])`, endindex is inclusive and optional. Both startindex and endindex can be negative, where -1 is the last element. If endindex is not specified, it returns only the value at startindex. If the indexes are out of range or invalid, the result is NULL.\n\nSince indexes start at zero, this example returns the third value in \"multifield\", if it exists:\n\n`... | eval n=mvindex(multifield, 2)`\n\n`mvjoin(MVFIELD,STR)` This function takes two arguments, multivalue field MVFIELD and string delimiter STR. The function concatenates the individual values of MVFIELD with copies of STR in between as separators. This example joins together the individual values of \"foo\" using a semicolon as the delimiter:\n\n`... | eval n=mvjoin(foo, \";\")`\n\n`mvrange(X,Y,Z)` This function creates a multivalue field for a range of numbers. This function can contain up to three arguments: a starting number X, an ending number Y (exclusive), and an optional step increment Z. If the increment is a timespan such as \"7d\", the starting and ending numbers are treated as epoch times. This example returns a multivalue field with the values 1, 3, 5, 7, 9.\n\n`... | eval mv=mvrange(1,11,2)`\n\n`mvsort(X)` This function uses a multivalue field X and returns a multivalue field with the values sorted lexicographically. `... | eval s=mvsort(mvfield)`\n`mvzip(X,Y,\"Z\")` This function takes two multivalue fields, X and Y, and combines them by stitching together the first value of X with the first value of field Y, then the second with the second, and so on. The third argument, Z, is optional and is used to specify a delimiting character to join the two values.
The default delimiter is a comma. This is similar to Python's zip command. `... | eval nserver=mvzip(hosts,ports)`\n\n## Statistical functions\n\nIn addition to these functions, a comprehensive set of statistical functions is available to use with the stats, chart, and related commands.\n\nFunction Description Example(s)\n`max(X,...)` This function takes an arbitrary number of numeric or string arguments, and returns the max; strings are greater than numbers. This example returns either \"foo\" or field, depending on the value of field:\n\n`... | eval n=max(1, 3, 6, 7, \"foo\", field)`\n\n`min(X,...)` This function takes an arbitrary number of numeric or string arguments, and returns the min; strings are greater than numbers. This example returns either 1 or field, depending on the value of field:\n\n`... | eval n=min(1, 3, 6, 7, \"foo\", field)`\n\n`random()` This function takes no arguments and returns a pseudo-random integer ranging from zero to 2^31 - 1, for example: 0…2147483647\n\n## Text functions\n\nFunction Description Examples\n`len(X)` This function returns the character length of a string X. `... | eval n=len(field)`\n`lower(X)` This function takes one string argument and returns the lowercase version. The upper() function also exists for returning the uppercase version. This example returns the value provided by the field username in lowercase.\n\n`... | eval username=lower(username)`\n\n`ltrim(X,Y)`\n\n`ltrim(X)`\n\nThis function takes one or two arguments X and Y and returns X with the characters in Y trimmed from the left side. If Y is not specified, spaces and tabs are removed. This example returns x=\"abcZZ\":\n\n`... | eval x=ltrim(\" ZZZZabcZZ \", \" Z\")`\n\n`replace(X,Y,Z)` This function returns a string formed by substituting string Z for every occurrence of regex string Y in string X. The third argument Z can also reference groups that are matched in the regex.
This example returns date with the month and day numbers switched, so if the input was 1/14/2015 the return value would be 14/1/2015:\n\n`... | eval n=replace(date, \"^(\\d{1,2})/(\\d{1,2})/\", \"\\2/\\1/\")`\n\n`rtrim(X,Y)`\n\n`rtrim(X)`\n\nThis function takes one or two arguments X and Y and returns X with the characters in Y trimmed from the right side. If Y is not specified, spaces and tabs are removed. This example returns n=\"ZZZZabc\":\n\n`... | eval n=rtrim(\" ZZZZabcZZ \", \" Z\")`\n\n`spath(X,Y)` This function takes two arguments: an input source field X and an spath expression Y, that is the XML or JSON formatted location path to the value that you want to extract from X. If Y is a literal string, it needs quotes, `spath(X,\"Y\")`. If Y is a field name (with values that are the location paths), it doesn't need quotes. This may result in a multivalued field. Read more about the `spath` search command. This example returns the values of locDesc elements:\n\n`... | eval locDesc=spath(_raw, \"vendorProductSet.product.desc.locDesc\")`\n\nThis example returns the hashtags from a twitter event:\n\n`index=twitter | eval output=spath(_raw, \"entities.hashtags\")`\n\n`split(X,\"Y\")` This function takes two arguments, field X and delimiting character Y. It splits the value(s) of X on the delimiter Y and returns X as a multivalue field. `... | eval n=split(foo, \";\")`\n`substr(X,Y,Z)` This function takes either two or three arguments, where X is a string and Y and Z are numeric. It returns a substring of X, starting at the index specified by Y with the number of characters specified by Z. If Z is not given, it returns the rest of the string.\n\nThe indexes follow SQLite semantics; they start at 1. Negative indexes can be used to indicate a start from the end of the string.\n\nThis example concatenates \"str\" and \"ing\" together, returning \"string\":\n\n`... 
| eval n=substr(\"string\", 1, 3) + substr(\"string\", -3)`\n\n`trim(X,Y)`\n\n`trim(X)`\n\nThis function takes one or two arguments X and Y and returns X with the characters in Y trimmed from both sides. If Y is not specified, spaces and tabs are removed. This example returns \"abc\":\n\n`... | eval n=trim(\" ZZZZabcZZ \", \" Z\")`\n\n`upper(X)` This function takes one string argument and returns the uppercase version. The lower() function also exists for returning the lowercase version. This example returns the value provided by the field username in uppercase.\n\n`... | eval n=upper(username)`\n\n`urldecode(X)` This function takes one URL string argument X and returns the unescaped or decoded URL string. This example returns \"http://www.splunk.com/download?r=header\":\n\n```... | eval n=urldecode(\"http%3A%2F%2Fwww.splunk.com%2Fdownload%3Fr%3Dheader\")```\n\n## Trigonometry and Hyperbolic functions\n\nFunction Description Examples\n`acos(X)` This function computes the arc cosine of X, in the interval [0,pi] radians. `... | eval n=acos(0)`\n\n`... | eval degrees=acos(0)*180/pi()`\n\n`acosh(X)` This function computes the arc hyperbolic cosine of X, in radians. `... | eval n=acosh(2)`\n`asin(X)` This function computes the arc sine of X, in the interval [-pi/2,+pi/2] radians. `... | eval n=asin(1)`\n\n`... | eval degrees=asin(1)*180/pi()`\n\n`asinh(X)` This function computes the arc hyperbolic sine of X, in radians. `... | eval n=asinh(1)`\n`atan(X)` This function computes the arc tangent of X, in the interval [-pi/2,+pi/2] radians. `... | eval n=atan(0.50)`\n`atan2(Y, X)` This function computes the arc tangent of Y, X in the interval [-pi,+pi] radians. Y is a value that represents the proportion of the y-coordinate. X is the value that represents the proportion of the x-coordinate.\n\nTo compute the value, the function takes into account the sign of both arguments to determine the quadrant.\n\n`...
| eval n=atan2(0.50, 0.75)`\n`atanh(X)` This function computes the arc hyperbolic tangent of X, in radians. `... | eval n=atanh(0.500)`\n`cos(X)` This function computes the cosine of an angle of X radians. `... | eval n=cos(-1)`\n\n`... | eval n=cos(pi())`\n\n`cosh(X)` This function computes the hyperbolic cosine of X radians. `... | eval n=cosh(1)`\n`hypot(X,Y)` This function computes the hypotenuse of a right-angled triangle whose legs are X and Y.\n\nThe function returns the square root of the sum of the squares of X and Y, as described in the Pythagorean theorem.\n\n`... | eval n=hypot(3,4)`\n`sin(X)` This function computes the sine. `... | eval n=sin(1)`\n\n`... | eval n=sin(90 * pi()/180)`\n\n`sinh(X)` This function computes the hyperbolic sine. `... | eval n=sinh(1)`\n`tan(X)` This function computes the tangent. `... | eval n=tan(1)`\n`tanh(X)` This function computes the hyperbolic tangent. `... | eval n=tanh(1)`\n\nHave questions? Visit Splunk Answers and search for a specific function or command." ]
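Several of the comparison and conversion functions above have simple equivalents outside SPL, which is handy for sanity-checking an expression before putting it in a search. The Python sketch below mimics the documented semantics of `like`, `match`, and the \"commas\"/\"duration\" modes of `tostring`; the `spl_*` names are our own illustration, not part of any Splunk API.

```python
import re

def spl_like(value: str, pattern: str) -> bool:
    """like(TEXT, PATTERN): % matches any run of characters, _ exactly one."""
    # Translate the SQL-style pattern into a regex; literal text is escaped first.
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, value) is not None

def spl_match(subject: str, regex: str) -> bool:
    """match(SUBJECT, "REGEX"): true if the regex matches any substring of SUBJECT."""
    return re.search(regex, subject) is not None

def spl_tostring_commas(x) -> str:
    """tostring(X, "commas"): thousands separators; decimals rounded to 2 places."""
    if isinstance(x, float) and not x.is_integer():
        return f"{x:,.2f}"
    return f"{int(x):,}"

def spl_tostring_duration(seconds) -> str:
    """tostring(X, "duration"): seconds rendered as HH:MM:SS."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"
```

For example, `spl_tostring_duration(615)` reproduces the `foo2=00:10:15` result shown for `tostring(foo, \"duration\")`, and `spl_match` with the IP-address regex above performs the same anchored full match.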
https://numbermatics.com/n/5116140/
[ "# 5116140\n\n## 5,116,140 is an even composite number composed of five prime numbers multiplied together.\n\nWhat does the number 5116140 look like?\n\nThis visualization shows the relationship between its 5 prime factors (large circles) and 72 divisors.\n\n5116140 is an even composite number. It is composed of five distinct prime numbers multiplied together. It has a total of seventy-two divisors.\n\n## Prime factorization of 5116140:\n\n### 2² × 3² × 5 × 43 × 661\n\n(2 × 2 × 3 × 3 × 5 × 43 × 661)\n\nSee below for interesting mathematical facts about the number 5116140 from the Numbermatics database.\n\n### Names of 5116140\n\n• Cardinal: 5116140 can be written as Five million, one hundred sixteen thousand, one hundred forty.\n\n### Scientific notation\n\n• Scientific notation: 5.11614 × 10⁶\n\n### Factors of 5116140\n\n• Number of distinct prime factors ω(n): 5\n• Total number of prime factors Ω(n): 7\n• Sum of prime factors: 714\n\n### Divisors of 5116140\n\n• Number of divisors d(n): 72\n• Complete list of divisors:\n• Sum of all divisors σ(n): 15903888\n• Sum of proper divisors (its aliquot sum) s(n): 10787748\n• 5116140 is an abundant number, because the sum of its proper divisors (10787748) is greater than itself. Its abundance is 5671608\n\n### Bases of 5116140\n\n• Binary: 10011100001000011101100₂\n• Base-36: 31NN0\n\n### Squares and roots of 5116140\n\n• 5116140 squared (5116140²) is 26174888499600\n• 5116140 cubed (5116140³) is 133914394048343544000\n• The square root of 5116140 is 2261.8885914209\n• The cube root of 5116140 is 172.3114511993\n\n### Scales and comparisons\n\nHow big is 5116140?\n• 5,116,140 seconds is equal to 8 weeks, 3 days, 5 hours, 9 minutes.\n• To count from 1 to 5,116,140 would take you about twelve weeks!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude.
If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 5116140 cubic inches would be around 14.4 feet tall.\n\n### Recreational maths with 5116140\n\n• 5116140 backwards is 0416115\n• 5116140 is a Harshad number.\n• The number of decimal digits it has is: 7\n• The sum of 5116140's digits is 18\n• More coming soon!" ]
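All of the divisor-related numbers quoted on this page follow mechanically from the prime factorization 2² × 3² × 5 × 43 × 661, so they are easy to re-derive. A short Python check (trial-division factorization plus the multiplicative formulas for d(n) and σ(n)) reproduces them:

```python
def prime_factors(n: int) -> dict:
    """Trial-division factorization: returns {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:  # whatever remains is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

def num_divisors(factors: dict) -> int:
    """d(n) = product of (exponent + 1) over the factorization."""
    out = 1
    for e in factors.values():
        out *= e + 1
    return out

def divisor_sum(factors: dict) -> int:
    """sigma(n) = product of (p^(e+1) - 1) / (p - 1) over the factorization."""
    total = 1
    for p, e in factors.items():
        total *= (p ** (e + 1) - 1) // (p - 1)
    return total

f = prime_factors(5116140)          # {2: 2, 3: 2, 5: 1, 43: 1, 661: 1}
aliquot = divisor_sum(f) - 5116140  # sum of proper divisors
```

Since the aliquot sum 10787748 exceeds 5116140 itself, the number is abundant, and the abundance quoted above is just `aliquot - 5116140`.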
https://cyberleninka.org/article/n/1221015
[ "# On Mixed Precision Iterative Refinement for Eigenvalue Problems\n\nAvailable online at www.sciencedirect.com\n\nSciVerse ScienceDirect\n\nProcedia Computer Science 18 (2013) 2647 - 2650\n\nInternational Conference on Computational Science, ICCS 2013\n\nOn Mixed Precision Iterative Refinement for Eigenvalue Problems\n\nKarl E. Prikopa, Wilfried N. Gansterer*\n\nUniversity of Vienna, Research Group Theory and Applications of Algorithms, Austria\n\nAbstract\n\nWe investigate novel iterative refinement methods for solving eigenvalue problems which are derived from Newton's method.
In particular, approaches for the solution of the resulting linear system based on saddle point problems are compared and evaluated. The algorithms presented exploit the performance benefits of mixed precision, where the majority of operations are performed at a lower working precision and only critical steps within the algorithm are computed in a higher target precision, leading to a solution which is accurate to the target precision. A complexity analysis shows that the best novel method presented requires fewer floating point operations than the so far only existing iterative refinement eigensolver by Dongarra, Moler and Wilkinson.\n\nKeywords: iterative refinement; eigenvalue problems; mixed precision; saddle point/equilibrium problems\n\n1. Introduction\n\nWe investigate novel iterative refinement methods for solving the eigenvalue problem Ax = λx with symmetric A ∈ ℝⁿˣⁿ based on Newton's method for solving non-linear equations. Iterative refinement for linear systems is a strategy for improving the accuracy of a computed solution by trying to reduce round-off errors. The method iteratively computes a correction term to an approximate solution by solving a linear system using the residual of the result as the right-hand side. Mixed precision iterative refinement for solving linear systems of equations is a special performance-oriented case of iterative refinement where the majority of operations is performed in single precision while still achieving double precision accuracy.\n\nThis paper introduces a mixed precision iterative refinement method for eigenvalue problems by returning to the origin of iterative refinement, Newton's method, which leads to linear systems changing in each iteration for each eigenpair. The complexity of solving the resulting linear systems for all n eigenpairs directly would be O(n⁴), which is too high for an efficient eigenvalue iterative refinement method. Therefore we exploit solvers for equilibrium problems to reduce the complexity and find an efficient solution for the linear systems.\n\n2. Related Work\n\nDongarra, Moler and Wilkinson described an iterative refinement method for improving the numerical accuracy of eigenvalues and eigenvectors, based on the same concept as iterative refinement for linear systems\n\n* Corresponding author. Tel.: +43-1-4277-78311 E-mail address: [email protected].\n\nSelection and peer review under responsibility of the organizers of the 2013 International Conference on Computational Science doi: 10.1016/j.procs.2013.06.002
Therefore we exploit solvers for equilibrium problems to reduce the complexity and find an efficient solution for the linear systems.\n\n2. Related Work\n\nDongarra, Moler and Wilkinson described an iterative refinement method for improving the numerical accuracy of eigenvalues and eigenvectors, based on the same concept as iterative refinement for linear systems\n\n* Corresponding author. Tel.: +43-1-4277-78311 E-mail address: [email protected].\n\nSelection and peer review under responsibility of the organizers of the 2013 International Conference on Computational Science doi: 10.1016/j.procs.2013.06.002\n\n. Their eigenvalue iterative refinement [3, 4] improves eigenvalues and either improves or computes the corresponding eigenvectors. The method is divided into two parts : the pre-SICE phase and the SICE phase. In the pre-SICE phase the matrix is factored in single precision using the Schur decomposition A = QUQ*, where U is an upper triangular matrix and Q is a unitary matrix. The SICE phase then uses the results from the Schur decomposition in combination with triangularizations using plane rotations to improve the approximate eigenvalues by iteratively solving a linear system for a correction term using the residual r = Ax - Ax as the right hand-side.\n\n3. Computational Cost of Existing Eigenvalue Iterative Refinement\n\nIn the LAPACK User's Guide , the LAPACK eigenvalue solver for symmetric matrices xSYEVD is described to have a floating point operation count of 9n3 for computing the eigenvalues and the right eigenvectors and 1.33n3 operations for computing the eigenvalues only. As described in , the pre-SICE phase requires 10n3 + 30n2 fused-multiply add (FMA) operations, which corresponds well with the flop count of the LAPACK function for computing the eigenvalues of a general matrix. The SICE phase requires 13n2 FMA operations per iteration. The author states that on average 3 iterations are needed to improve an eigenpair. 
Our experiments have shown that while this is correct for small matrices with n = 10, the number of iterations required increases with the matrix size; for example, a matrix with n = 1000 requires on average 4.77 iterations to reach convergence.\n\nThe method described in [3, 4] computes the majority of operations, the Schur decomposition and solving the linear systems, in single precision (SP) and only a few operations, computing the residual and updating the result, use double precision (DP) to achieve the target accuracy of the eigenpairs. To compute all n eigenpairs to DP accuracy, the total number of operations is (10 + 13k)n³ FMA operations with k being the average number of iterations and kn³ operations being executed using DP to compute the residual. Estimating the performance difference between SP and DP to be a factor of 2, the algorithm would perform (5 + 7k)n³ DP operations. Thus, for k = 5 the flop count is higher than for the LAPACK eigenvalue solver for general matrices xGEEV, which requires 26.33n³ FLOPs to compute the eigenvalues and eigenvectors.\n\n4. Newton's Method for Iterative Refinement Eigensolver\n\nIterative refinement is based on Newton's method for solving non-linear equations, which finds a root of a function f(x) using the iterative process x_{k+1} = x_k − f(x_k)/f′(x_k). In higher dimensions, f′(x) is the Jacobian matrix J_f(x), which results in a linear system of equations. In iterative refinement, the function f(x) is the residual of the solution which is being improved. The residual of eigenvalue problems can be expressed for each eigenpair as Ax − λx. In earlier work, the function f is expanded by the additional condition xᵀx − 1 to normalize the eigenvector x. For an eigenpair the function is defined as f(x, λ) = (Ax − λx; xᵀx − 1). f(x, λ) = 0 if and only if Ax = λx and xᵀx = 1, which requires x to be normalized. An alternative approach is f(x, λ) = (Ax − λx; −0.5(xᵀx − 1)). Introducing the factor −0.5 leads to the symmetric Jacobian matrix J_f(x, λ) = ( A − λI  −x ; −xᵀ  0 ). This structure can be exploited by special system solvers, as will be shown in Section 5. The correction term Δ(x_k, λ_k) = (Δx_k; Δλ_k) consists of a correction for the eigenvector and for the eigenvalue and is found by solving the linear system J_f(x_k, λ_k) Δ(x_k, λ_k) = −f(x_k, λ_k). Then, the approximate solution from the previous iteration is updated according to (x_{k+1}, λ_{k+1})ᵀ = (x_k, λ_k)ᵀ + Δ(x_k, λ_k).\n\nThe eigenvalue iterative refinement takes an approximate eigenvalue and a random vector as its input and each eigenpair is refined separately. Based on experimental observations, compared to [3, 4], the rate of convergence can be improved significantly by normalising the eigenvector x_k before constructing and solving the linear system. For each eigenpair a new linear system has to be solved in each iteration because the improved eigenvalue and eigenvector appear in the function f(x, λ) and in its Jacobian. Therefore the system of equations changes in each iteration. If the linear system were factorized, this would lead to a complexity of O(n³) for improving a single eigenpair, resulting in a complexity of O(n⁴) for improving all eigenpairs multiplied by the number of iterations required to reach a target accuracy. This is too high for an efficient eigenvalue iterative refinement method. The Jacobian matrices are saddle point matrices [7, 8, 9] and corresponding solvers can be used to reduce the complexity.
This does not yet reduce the complexity of the iterative refinement method, because multiple linear systems have to be solved for the range space method. The range space method only requires a solver for the shifted linear system A − λI.

The initial approximation for the eigenvalues can be computed using the Schur factorization A = QUQᵀ in single precision. The Schur factors can then be used to solve the linear systems by applying the shift to U and inverting U − λI. This removes the necessity of a decomposition in each iteration and reduces the complexity for improving an eigenpair to O(n²). As we consider A being symmetric in this paper, the Schur factor U is a diagonal matrix and the shifted system can be inverted at a very low cost of n operations. The orthogonal matrix Q from the Schur factorization can be used as the initial approximation of the eigenvectors; random values can also be used instead, although this will increase the number of iterations required until convergence. This behaviour will be shown in Section 6. When computing U − λI, the approximate eigenvalue is subtracted from the diagonal of U. This can result in U − λI becoming singular, causing the inversion to fail. A small correction δ has to be introduced when subtracting the eigenvalue from the diagonal of U to ensure non-singularity.

Acquiring the initial approximate eigenvalues through the Schur decomposition requires 9n³ SP operations. For each eigenpair, computing the residual costs n² DP operations. The range space method requires the solution of three linear systems in SP using the already available Schur factors, each solution consisting of two matrix-vector operations (2n²) and a back-substitution (n²/2) to invert U − λI. The total number of operations is 8.5n² per iteration for each eigenpair. Taking into account the mixed precision computation with a speed-up factor of 2 for SP, the total number of operations would be further reduced to (4.5 + 4.75k)n³ DP operations.
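For the symmetric case, where the Schur factor U is diagonal, the shifted solve reduces to two matrix-vector products and n divisions. A hedged NumPy sketch (names and the default guard value δ are ours):

```python
import numpy as np

def shifted_solve(Q, u, lam, b, delta=1e-8):
    """Solve (A - lam*I) z = b given precomputed Schur factors of a symmetric
    A, i.e. A = Q @ diag(u) @ Q.T.  The small correction delta keeps the
    shifted diagonal non-singular when lam coincides with an eigenvalue,
    as described in the text."""
    d = u - lam
    d = np.where(np.abs(d) < delta, delta, d)   # guard against a zero pivot
    return Q @ ((Q.T @ b) / d)                  # two mat-vecs + n divisions
```

No new factorization is needed per iteration: the same Q and u are reused for every shift λ, which is what brings the per-eigenpair cost down to O(n²).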
Due to A being symmetric, the Schur factor U becomes a diagonal matrix and a matrix-vector operation with it is reduced to n multiplications, leading to (9 + 7k)n³ operations and, considering SP, to (4.5 + 4k)n³ DP operations. Thus, the new method has a lower complexity than the algorithm in [3, 4] with (5 + 7k)n³ DP operations, as shown in Section 2.

Another factorization of a symmetric matrix A is the Householder tridiagonalization A = QTQᵀ, with T being tridiagonal and Q the product of the Householder transformations. As described previously for the Schur decomposition, the shifted linear systems can be solved analogously by applying the shift to T, again resulting in the reduced complexity of O(n²) for each improved eigenpair.

The approximate eigenvalues are obtained by computing the Householder tridiagonalization in 4n³/3 SP operations, followed by the Pan–Walker–Kahan QR algorithm with a complexity of O(n²). The product Q of the Householder transformations is needed explicitly for solving the shifted linear systems, which requires another 4n³/3 SP operations. The three linear systems solved for the range space method consist of two matrix-vector operations (2n²) and a bidiagonal factorization to solve the tridiagonal shifted system T − λI with a complexity of O(n). This totals 8n³/3 + 7kn³ operations. Applying mixed precision, only the computation of the residual requires DP, with n² operations, reducing the total number of DP operations to (4/3 + 4k)n³. If the number of iterations k were the same for all discussed approaches, the Householder approach would lead to the lowest number of operations.

6. Experiments and Initial Results

The experiments compare the eigenvalue iterative refinement by Dongarra, Moler and Wilkinson (SICEDR), and the saddle point approaches with Schur factorization (SPSIR) and Householder tridiagonalization (SPHIR). Almost all experiments were conducted using Matlab 2010a, with the exception of SICEDR, which was implemented in C.
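The DP-equivalent operation counts derived above can be collected in a small helper for comparing the methods at a given average iteration count k. This is only a bookkeeping sketch; the SPSIR entry uses the general (4.5 + 4.75k)n³ form from the Schur-based derivation, before the diagonal simplification.

```python
def dp_equivalent_coeff(method, k):
    """DP-equivalent flop coefficient (multiple of n^3) for refining all n
    eigenpairs, with k the average number of Newton iterations, following
    the operation counts in the text."""
    formulas = {
        "SICEDR": lambda k: 5.0 + 7.0 * k,        # Dongarra/Moler/Wilkinson
        "SPSIR":  lambda k: 4.5 + 4.75 * k,       # Schur-based saddle point
        "SPHIR":  lambda k: 4.0 / 3.0 + 4.0 * k,  # Householder-based
    }
    return formulas[method](k)
```

Plugging in per-method average iteration counts makes it easy to see when a cheaper per-iteration cost compensates for a higher number of iterations.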
The initial experiments summarized in this section focus on the number of iterations required for convergence and on the accuracy of the results, which is compared based on the relative residual. The iterative process terminates if the correction term ‖Δ(x, λ)‖∞ is less than a defined threshold ε, and a predefined maximum number of iterations is set as an additional termination criterion. For our experiments, random symmetric matrices are used, ε = 10⁻¹², and the maximum number of iterations is set to 20.

Figure 1 shows the average number of iterations per eigenpair for the different methods. The comparison cannot be limited to the number of iterations but has to also include the total floating point operations. The DP operations for the mixed precision methods for n = 500, based on the operation counts described in the previous sections, are: SICEDR with k = 4.23 requires 34.61n³ FLOPs, SPSIR with k = 5.44 requires 30.34n³ FLOPs and SPHIR with k = 9.22 requires 38.21n³ FLOPs. Note that the average iteration count k for SPSIR initialised with Q is slightly higher compared to SICEDR, but the total operations count is lower.

Fig. 1. Average number of iterations for different system sizes n.

Fig. 2. Comparison of the convergence history for the eigenpair (λ₀, x₀) of a symmetric matrix with n = 1000.

In Figure 2, the convergence history for the first eigenpair (λ₀, x₀) of a matrix with n = 1000 is shown for all methods in terms of the relative residual. SPSIR initialized with the Schur factor Q only requires 3 iterations. A random vector as the starting vector leads to a higher iteration count, achieving the target precision in 5 iterations.
SPHIR does not have an approximation for the eigenvectors and therefore starts with a random vector, converging in 5 iterations.

7. Conclusion

New approaches for mixed precision eigenvalue iterative refinement have been presented based on the solution of saddle point problems. The range space method is used in combination with the Schur factorization and the Householder tridiagonalization to solve the resulting non-constant linear systems. It has been shown that the number of floating point operations is lower than for the previously described iterative refinement method by Dongarra, Moler and Wilkinson [3], even though the number of iterations is higher. Future research will have to analyse the convergence, the numerical accuracy and the performance of the presented methods. The behaviour for non-symmetric system matrices and other saddle point solvers are future topics of interest.

Acknowledgements

This work has been partly supported by the Austrian Science Fund (FWF) in project S10608 (NFN SISE).

References

[1] J. Wilkinson, Rounding Errors in Algebraic Processes, Her Majesty's Stationery Office, London, 1963.

[2] A. Buttari, J. Dongarra, J. Kurzak, P. Luszczek, S. Tomov, Using mixed precision for sparse matrix computations to enhance the performance while achieving 64-bit accuracy, ACM Trans. Math. Softw. 34 (4) (2008) 17:1-17:22.

[3] J. J. Dongarra, C. B. Moler, J. H. Wilkinson, Improving the Accuracy of Computed Eigenvalues and Eigenvectors, SIAM Journal on Numerical Analysis 20 (1) (1983) 23.

[4] J. Dongarra, Algorithm 589: SICEDR: A FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues, ACM Transactions on Mathematical Software (TOMS) 8 (4) (1982) 371-375.

[5] E. Anderson, Z. Bai, C. Bischof, L. S. Blackford, J. Demmel, J. J. Dongarra, J. Du Croz, S. Hammarling, A. Greenbaum, A. McKenney, D. Sorensen, LAPACK Users' Guide (third ed.), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1999.

[6] H.
Rock, Finding an Eigenvector and Eigenvalue with Newton's Method for Solving Systems of Nonlinear Equations, Tech. rep., Department of Scientific Computing, University of Salzburg, Salzburg (2003).

[7] S. A. Vavasis, Stable numerical algorithms for equilibrium systems, SIAM J. Matrix Anal. Appl. 15 (1992) 1108-1131.

[8] M. Benzi, G. H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numerica 14 (2005) 1-137.

[9] W. Gansterer, J. Schneid, C. W. Ueberhuber, Mathematical properties of equilibrium systems, Technical report (2003).

[10] G. Golub, C. Van Loan, Matrix Computations, Johns Hopkins University Press, 1996.
### Re^11: Computing results through Arrays

by aaron_baugher (Curate) on Jun 24, 2015 at 10:46 UTC (#1131783)

in reply to Re^10: Computing results through Arrays
in thread Computing results through Arrays

robby_dobby already pointed out the actual mistake: you need to pass fnd_max() a reference to the hash, since that's what fnd_max() is expecting. See how I pass my hashes to set_column_widths().

Now beyond that: First, I'd use a more descriptive subroutine name, like "max_value_of_hash", and drop the prototype. Prototypes are advanced juju and shouldn't be used most of the time. Second, if you want to get the largest value from a hash, you don't need to access the keys at all. Here are some examples, starting with the simplest and wordiest:

```
#!/usr/bin/env perl
use 5.010; use strict; use warnings;

# newbie but clean version
sub max_value_of_hash {
    my $h   = shift;
    my $max = 0;
    for my $v (values %$h){
        if ($v > $max){
            $max = $v;   # keep setting $max to larger value
        }
    }
    return $max;
}

# more perlish and elegant version
sub max_value_of_hash2 {
    my $h   = shift;
    my $max = 0;
    $_ > $max ? $max = $_ : undef for values %$h;
    return $max;
}

# let a module do it
sub max_value_of_hash3 {
    use List::Util qw(max);
    return max values %{$_[0]};
}

my %hash = ( a => 1, b => 2, c => 5, d => 3 );
say max_value_of_hash( \%hash);
say max_value_of_hash2(\%hash);
say max_value_of_hash3(\%hash);
```

Aaron B.
Available for small or large Perl jobs and *nix system administration; see my home node.

Re^12: Computing results through Arrays
by [email protected] (Novice) on Jun 24, 2015 at 13:01 UTC

Sorry, I was not clear with my requirement; also I made a bad mistake in my code.

My requirement is to print the maximum value the same way the average value printing works.
So the assignment of values to the hash keys should be the max value instead of the sum of the values in that group.

```
while(<DATA>){
    next unless /\w/;
    my($server,$datetime,$database,$speed) = (split)[0,1,2,3];
    my $ddhhmm = substr $datetime,0,16;
    my $ddhh   = substr $datetime,0,13;
    $h{$ddhh  }{$database} += $speed;
    $m{$ddhhmm}{$database} += $speed;
    $db{$database} = 1;
    $sr{$server  } = 1;
}
```

Is there a way to assign the maximum value in this while loop shown above to $h{$ddhh}{$database} and $m{$ddhhmm}{$database}?

If this is possible then I can follow the same procedure to print max values the same way we did for the average.

One more doubt... All I did to find the average was to loop through the hash one more time and assign the average value to it as shown below:

```
for my $key (sort keys %h){
    for (@db) { $h{$key}{$_} = round($h{$key}{$_} / ($count * 60)) };
}
```

Can it be done efficiently without looping through the hash again? I mean, can this be done in the while loop itself?

I may still not understand exactly what you're trying to do. But if you want to keep track of a max value for each time/database at the same time that you're keeping a total for the purpose of averaging, you could do something like this inside your while() loop:

```
$h{$ddhh  }{$database}{total} += $speed;
$m{$ddhhmm}{$database}{total} += $speed;
$h{$ddhh  }{$database}{max} = max(($h{$ddhh  }{$database}{max} || 0), $speed);
$m{$ddhhmm}{$database}{max} = max(($m{$ddhhmm}{$database}{max} || 0), $speed);
```

You can use the max() function from List::Util or write your own. Now later in the code where you used to access the total speed with $h{$timestamp}{$database}, you'll change that to access it as $h{$timestamp}{$database}{total}. And that makes room to keep track of the maximum value for each one in $h{$timestamp}{$database}{max}.
Make sense?

Aaron B.
Available for small or large Perl jobs and *nix system administration; see my home node.

Thanks, this is exactly what I was looking for. Once again, thanks a lot.
Open Access Editor's Choice Article

# Estimation of Changes of Forest Structural Attributes at Three Different Spatial Aggregation Levels in Northern California using Multitemporal LiDAR

1 Forest Engineering Resources and Management, College of Forestry, Oregon State University, 2150 SW Jefferson Way, Corvallis, OR 97331, USA
2 US Forest Service Pacific Southwest Research Station, 3644 Avtech Parkway, Redding, CA 96002, USA
3 US Forest Service Pacific Northwest Research Station, 3200 SW Jefferson Way, Corvallis, OR 97331, USA
4 US Forest Service, Rocky Mountain Research Station, 1221 S Main St, Moscow, ID 83843, USA
* Author to whom correspondence should be addressed.
Deceased.
Remote Sens. 2019, 11(8), 923; https://doi.org/10.3390/rs11080923
Received: 18 March 2019 / Revised: 10 April 2019 / Accepted: 12 April 2019 / Published: 16 April 2019

## Abstract

Accurate estimates of growth and structural changes are key for forest management tasks such as determination of optimal rotation times and site indices, and for identifying areas experiencing difficulties to regenerate. Estimation of structural changes, especially for biomass, is also key to quantify greenhouse gas (GHG) emissions/sequestration. We compared two different modeling strategies to estimate changes in volume (V), basal area (BA) and aboveground biomass (B) at three different spatial aggregation levels, using auxiliary information from two light detection and ranging (LiDAR) flights. The study area is Blacks Mountain Experimental Forest, a ponderosa pine dominated forest in Northern California, for which two LiDAR acquisitions separated by six years were available.
Analyzed strategies consisted of (1) directly modeling the observed changes as a function of the LiDAR auxiliary information (the $δ$-modeling method) and (2) modeling V, BA and B at two different points in time, including a term to account for the temporal correlation, and then computing the changes as the difference between the predicted values of V, BA and B for time two and time one (the $y$-modeling method). We analyzed predictions and measures of uncertainty at three different levels of aggregation (i.e., pixels, stands or compartments, and the entire study area). Results showed that changes were very weakly correlated with the LiDAR auxiliary information. Both modeling alternatives provided similar results, with a better performance of the $δ$-modeling method for the entire study area; however, this method also showed some inconsistencies and seemed to be very prone to extrapolation problems. The $y$-modeling method, which seems to be less prone to extrapolation problems, provides more flexible outputs and can outperform the $δ$-modeling method at the stand level. The weak correlation between changes in structural attributes and LiDAR auxiliary information indicates that pixel-level maps have very large uncertainties, and that estimation of change clearly requires some degree of spatial aggregation; additionally, in similar environments, it might be necessary to increase the time lapse between LiDAR acquisitions to obtain reliable estimates of change.

## 1. Introduction

Light detection and ranging (LiDAR) data have been extensively used in forest inventories to provide auxiliary information that is highly correlated with multiple forest structural attributes [1,2,3]. This strong correlation allows estimating forest structural attributes more efficiently than if only field measurements are available. In addition, the spatially explicit nature of LiDAR enables the mapping of forest attributes at fine resolutions (e.g., [2,5]).
Accurate estimates of growth and structural changes are key for forest management, as multiple management tasks, such as determination of optimal rotation times, calculation of site indices or the identification of areas experiencing difficulties in regeneration, depend on them. Estimation of biomass is also key to quantifying greenhouse gas (GHG) emissions/sequestration, to comply with the Intergovernmental Panel on Climate Change (IPCC) reporting and good practice guidelines, and to develop a correct appraisal of forest resources for carbon markets. The extensively used area based approach (ABA) provides a way to estimate forest attributes at multiple levels, ranging from single pixels to large areas, using LiDAR auxiliary information. Availability of repeated LiDAR data acquisitions has opened the door to estimation of changes in forest structural attributes over time (e.g., [8,9]) using the ABA method.
However, validations of predictive models in the ABA literature are typically performed using global metrics of model fit, such as the sample-based root mean square error or bias, which provide average measures of uncertainty for predictions made for pixels or plots. These measures of uncertainty derived from the model fitting stage do not directly translate into measures of uncertainty for predictions for AOIs composed of multiple pixels (i.e., countries, municipalities, forests, stands, etc.). In addition, even when considering single pixels, they are not AOI-specific, as they only provide an average value, across the entire population, of the error that can be expected using a given model.

Thus, it is clear that uncertainty measures used as quality controls in forest inventories need to be made at the AOI level, and change estimation is not an exception. For large areas holding large sample sizes, AOI-specific estimates of means or totals and their measures of uncertainty can be obtained using direct estimators (e.g., [10,11,12]) that only use sample data from the AOI under consideration. However, if the AOI sample sizes are not large enough to support direct estimates with reliable precision, then they must be regarded as small areas.

Small area estimation (SAE) techniques, especially empirical best linear unbiased predictors (EBLUPs), in combination with the ABA approach, have been used to obtain estimates, and their corresponding measures of uncertainty, for subpopulations such as municipalities, groups or management units and stands [4,14,16,17]. SAE techniques allow correcting the potential bias problems of synthetic predictions (i.e., predictions developed assuming that a general model developed at the population level holds for all subpopulations) and also permit reducing the large variance problems of direct estimators when AOI sample sizes are small.
In addition, while EBLUPs have been extensively used in SAE contexts, they can also be used to produce estimates for subpopulations or AOIs with large sample sizes and preserve important advantages over other methods. First, they allow obtaining model-unbiased estimates and their corresponding measures of uncertainty for all AOIs using a single model that explicitly considers potential variations between AOIs. This is a clear advantage over synthetic methods that assume that a certain relation derived for the entire population holds in all AOIs. A second advantage of EBLUPs is that it is possible to reduce the modeling effort required by direct model-based or model-assisted methods, where a model is needed for each AOI. It is thus clear that SAE techniques in combination with LiDAR auxiliary information have potential applications in multiple forest inventory contexts. Unfortunately, to the best of our knowledge, all studies on SAE and forest inventories have focused on estimation of structural attributes at a given point in time, and little is known about (1) their performance when applied to forest structure change estimation, and (2) how these techniques compare to other methods used for estimation of changes in AOIs comprising entire populations [10,12,18,19] and especially subpopulations.

In this study, we analyzed the two most commonly used strategies to model changes in structural forest attributes using repeated LiDAR acquisitions, and analyzed their performance when used to obtain EBLUPs for AOIs of different size. The first strategy, referred to hereafter as the $δ$-modeling method, considers the change, $δ$, over the time between LiDAR acquisitions as the model response. The second strategy, which we will call the $y$-modeling method, focuses on modeling the structural attributes $y$, and their derived change over time.
As a novelty, in the $y$-modeling method, the temporal correlation of both model errors and AOI random effects was taken into account. We considered changes in three structural variables, and AOIs at three different spatial aggregation levels, in order to provide insights for future applications where estimates for an entire population and for subpopulations of different sizes are needed. Variables under study are standing volume (V), above ground biomass (B) and basal area (BA), and the AOIs subject to analysis are (1) an entire forested area or landscape, (2) subpopulations, which in this case are forest stands, and (3) pixels, as gridded maps are a common output in mapping applications.

## 2. Materials and Methods

#### 2.1. Study Area

The study area is Blacks Mountain Experimental Forest (BMEF), a 3715 ha forest managed by the United States Forest Service, located northeast of Lassen National Park in northern California, USA (Figure 1). Elevation ranges from 1700 m to 2100 m above sea level. Slopes are gentle (<10%) on the lower parts of the forest and moderate (10%–40%) at higher elevations. Climate is Mediterranean with a certain degree of continentality, with dry summers and wet and cold winters, when precipitation is in the form of snow. Average precipitation is 460 mm per year, with monthly average temperatures that range from −9 °C to 29 °C. Soils are developed over basalts, with depths that range from 1 to 3 m. Ponderosa pine (Pinus ponderosa Lawson & C. Lawson) dominated forest occupies the majority of the area. Incense cedar (Calocedrus decurrens (Torr.) Florin), white fir (Abies concolor (Gordon & Glend.) Hildebr.) and Jeffrey pine (Pinus jeffreyi Grev. & Balf.) are abundant accompanying species. Forest structure is relatively open and the canopy cover varies greatly within the forest (see Figure 1). A more detailed description of the study area can be found in [21,22].

#### 2.2.
Sampling Design and Field Data

In total, 106 forested stands were delineated in BMEF. Small non-forested patches were masked out of the study area and hence were not considered part of the population under study. Out of the 106 forested stands, 24 were selected and sampled in the field. Nine of the remaining 82 unsampled stands were subject to thinning during the period between the two available LiDAR acquisitions (i.e., 2009–2015), and all thinning operations were finished by fall 2011. These nine stands are located on the southwestern edge of BMEF and were analyzed separately, because the sample of field plots used to train the LiDAR models did not include any stand subject to similar silvicultural interventions (Figure 1). Sampled stands come from a long-term research project initiated in BMEF in 1991 and, excluding the nine thinned stands, were representative of the forest structures and forest management treatments applied in the rest of BMEF.

Sampled stands were subject to six different types of treatments resulting from crossing two different factors. The first factor is structural diversity. It has three levels, referred to hereafter as low structural diversity (LoD), high structural diversity (HiD) and research natural areas (RNA), or controls. Low structural diversity stands are subject to thinning operations aiming to generate simplified single-strata structures. High diversity stands are subject to thinning where all canopy layers and age groups are preserved, resulting in a multi-storied forest structure with trees of different sizes and ages. Neither the HiD stands nor the LoD stands were subject to thinnings during the period between the two available LiDAR flights. Finally, RNA stands are not subject to any thinning or harvest operation. In total, 10 LoD, 12 HiD and two RNA stands were measured in the field. The second factor under consideration was the presence or absence of prescribed forest fires.
Half of the LoD, HiD and RNA stands sampled in the field had been subject to prescribed fires, but only one of the RNA stands was subject to prescribed fires during the period 2009–2015.

A sample of 151 plots of 16 m radius (804 m²) was measured in the field during the summer of 2009 and then remeasured during the summer of 2016. All field plots were located on nodes of the 100 m by 100 m grid of monumented markers at BMEF. Coordinates of the markers were determined using traverse methods and survey-grade GPS observations and have an accuracy of 15 cm or better. For each of the 24 stands selected for sampling, a node of the BMEF 100 m grid was randomly selected and used as a starting node for a 282 m by 282 m grid formed by selecting every other plot of the 100 m grid moving in the diagonal directions. Field measurements were taken on the nodes of the 282 m by 282 m grid (see Figure 1).

Within each field plot, all live trees with DBH larger than 9 cm, and all dead standing trees with DBH larger than 12 cm, were stem mapped and measured for DBH and height. Plot basal area (BA) was derived directly from the field measurements. Volume (V) and above ground biomass (B) were computed as the sum of the individual tree volumes and biomasses of all standing trees. Individual tree volumes and biomasses were estimated using species-specific allometric models included in the national volume estimation library (NVEL) and in the national biomass estimation library (NBEL). To account for the one-year difference between the acquisition of field measurements in 2016 and the second LiDAR data acquisition obtained in 2015, plot-level values of the variables under analysis were computed for 2015 by linearly interpolating between the values obtained for 2009 and 2016. Finally, for each field plot we computed the change of V, B and BA on a per-year basis, as the difference of the plot-level values in 2009 and 2015 divided by 6.
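The plot-level bookkeeping described above can be sketched in a few lines. This is a small illustration with hypothetical values; the function name is ours, and the constants simply encode the 2009/2016 measurement years and the 2015 target year from the text.

```python
def per_year_change(v2009, v2016, t1=2009, t2=2016, target=2015):
    """Linearly interpolate a plot-level value measured in 2009 and 2016 to
    2015, then return the per-year change over 2009-2015, as described in
    the text for V, B and BA."""
    v2015 = v2009 + (v2016 - v2009) * (target - t1) / (t2 - t1)  # 6/7 of the way
    return (v2015 - v2009) / (target - t1)                       # divide by 6 years
```

Note that with a linear interpolation this collapses to (v2016 − v2009)/7, i.e., the average annual increment over the full seven-year measurement interval.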
For two plots close to the southeastern boundary of the forest, changes in V were extremely large, more than three standard deviations away from the mean value for the change in volume. These anomalous plots were removed from the analysis, because such large changes seemed to be derived from edge effects. Plot-level values for 2009 and 2016, and per-year increments for the period 2009–2015, for V, B and BA in the remaining 149 plots are summarized in Table 1.

For the nine unsampled stands thinned during the period 2009–2015, all thinning operations were completed by fall 2011. In total, 427.40 hectares were thinned, with prescriptions that varied among stands. Approximately 80% of the area was thinned from below, leaving a residual basal area of 17.22 m² ha⁻¹ to 25.25 m² ha⁻¹. For the remaining 20% of the area, approximately one quarter was not thinned, while the other three quarters were thinned to a residual BA that ranged from 6.89 m² ha⁻¹ to 13.77 m² ha⁻¹. The fresh weight of the total extractions for the 427.40 hectares subject to thinning was 11,009.38 Mg of logs and 23,164.32 Mg of chipped material.

#### 2.3. LiDAR Data Acquisitions

Two LiDAR acquisitions are available for BMEF. The first LiDAR dataset was acquired during the summer of 2009 using a Leica ALS 50 discrete return sensor. Flying altitude was 900 m, side-lap between adjacent flight lines was at least 50% and the scanning angle was ±14°. The LiDAR data vendor generated digital terrain models (DTMs) with an accuracy of 15 cm at the 95% confidence level. The same vendor performed a second LiDAR acquisition in the study area during the summer of 2015, using the same sensor, flying altitude and side-lap specifications. DTMs were also created for 2015 by the vendor.

Four sets of auxiliary variables were considered in this study. The first two sets are composed of 42 LiDAR predictors computed for each acquisition date.
Set 1 will represent the predictors for 2009 and Set 2 the predictors for 2015. These predictors are descriptors of the point cloud height distributions and were all relative quantities, to avoid introducing noise due to local differences in the point cloud densities of 2009 and 2015 . The third set of predictors, Set 3, was computed as the differences between the 2009 and the 2015 LiDAR predictors. Finally, the fourth set of predictors, Set 4, included the incoming solar radiation, computed using the Environmental Systems Research Institute (ESRI) ArcGIS Area Solar Radiation tool with the 2009 digital surface model (DSM) as input, and two treatments: (1) single- or multi-story structural diversity and (2) presence or absence of prescribed fires. All predictors were computed for each field plot and for a grid with a cell size of 805 m2 covering the entire BMEF. The cell size matched the field plot size and each cell of the grid was considered a population unit, equivalent to the field plots. Predictors and their corresponding acronyms used in further sections are summarized in Table A1.

#### 2.4. AOIs, Target Parameter and Overview of Modelling Strategies

Two different types of subsets of population units will be used repeatedly throughout the remaining sections. These subsets and their corresponding notation are: the sample of plots measured in the field, denoted using the sub-index $s$, and the target AOIs represented by pixels, denoted using the sub-index $\alpha$.
Three different groups of AOIs representing different levels of spatial aggregation were analyzed. The first group represents the largest level of spatial aggregation and corresponds to the entire population under study. Within this group, we considered the set of all sampled stands, SS, and the entire BMEF study area after removing the nine thinned and unsampled stands, SA (i.e., sampled stands plus unsampled but not thinned stands). The second group consists of the 106 forested stands in BMEF.
In this group, we considered separately the unsampled and thinned stands (nine stands), the unsampled and not thinned stands (73 stands) and the sampled and not thinned stands (24 stands). Finally, the third group is the set of all pixels of the LiDAR grid covering the forested area in BMEF.
The main objective of this study was to analyze AOI-specific estimates of the change between 2009 and 2015 for three different structural variables (V, B and BA). We will use the generic term variable of interest and the letter $y$ to refer to the forest structural variables, and the term target parameter and the Greek letter $\Delta$ to refer to the quantities that we seek to estimate. Hereafter, target parameters will always refer to changes over time of the totals of the variables of interest in the considered AOIs, and will be expressed on a per-hectare and per-year basis.
Considering that all pixels have the same area, the target parameter $\Delta_{\alpha}$ for a generic AOI or subset of population units, $\alpha$, can be expressed as:
$\Delta_{\alpha} = \sum_{i = 1}^{N_{\alpha}} K_{y \alpha} ( y_{i \alpha 15} - y_{i \alpha 09} ) = \sum_{i = 1}^{N_{\alpha}} K_{\delta \alpha} \delta_{i \alpha} ,$
where $N_{\alpha}$ is the number of population units (i.e., pixels) in the AOI. The terms $y_{i \alpha 15}$ and $y_{i \alpha 09}$ respectively represent the value of the variable of interest for 2015 and 2009 for the $i$th population unit of $\alpha$, and $\delta_{i \alpha}$ is the change, for the $i$th pixel of $\alpha$, in the variable of interest during the period 2009 to 2015. Finally, for comparability with previous studies, the variables of interest will be expressed on a per-unit-area basis, and the increments $\delta_{i \alpha}$ will be expressed on a per-unit-area and per-year basis. Thus, to ensure that $\Delta_{\alpha}$ is expressed in the correct units, it is necessary to introduce the factors $K_{y \alpha}$ and $K_{\delta \alpha}$. When $y_{i \alpha 15}$ and $y_{i \alpha 09}$ are expressed on a per-unit-area basis, $K_{y \alpha} = \frac{1}{6 N_{\alpha}}$, and for $\delta_{i \alpha}$ expressed on a per-unit-area and per-year basis, $K_{\delta \alpha} = \frac{1}{N_{\alpha}}$.
We calculated AOI estimates using two different methods.
The first, the $\delta$-modeling method, uses models for the estimation of change similar to those in approach A5 of Poudel et al. . In this approach, the change in a structural variable at the plot/pixel level ($\delta_{i \alpha}$) is directly modeled as a function of the LiDAR auxiliary variables available for the study area. The second, the $y$-modeling method, uses a modified version of approach A4 of Poudel et al. to obtain AOI-specific estimates of change. Models in this approach jointly relate the structural variables ($y_{\alpha 15}$ and $y_{\alpha 09}$) and the LiDAR auxiliary information at a given point in time, and account for the correlation between errors obtained for the same plot/pixel at different times. For both methods, variability between stands was accounted for by considering stands as small areas. Thus, stand-level random effects were included in the models.

#### 2.5.1. Model $\delta$-modeling Method

Models in the $\delta$-modeling method relate the change (per year) of the variable of interest in a population unit to the auxiliary variables for that population unit. To indicate that these models consider change in the variables of interest directly, model parameters, stand-level random effects and model errors will include the subscript $\delta$. Three different types of auxiliary variables were considered as potential predictors in the $\delta$-modeling method. First, the changes in the LiDAR auxiliary variables for the period 2009–2015, Set 3, were considered following Poudel et al. , as changes in LiDAR predictors are expected to correlate with growth or changes in forest attributes. Because forest structure relates to growth, the LiDAR auxiliary variables for 2009, Set 1, were also considered as potential predictors that act as proxies for forest structure at the beginning of the period 2009–2015.
Finally, the incoming solar radiation, the structural diversity factor and the presence of prescribed fires, Set 4, were also considered as potential predictors.
For the $j$th population unit in the $i$th stand, models of the $\delta$-modeling method have the form:
$\delta_{ij} = x_{\delta ij}^{t} \beta_{\delta} + v_{\delta i} + \varepsilon_{\delta ij} ,$
where $t$ indicates the transpose operator and $x_{\delta ij}^{t}$ is a vector of auxiliary variables in which the first element takes the value 1 for the intercept. The term $\beta_{\delta}$ is a vector of model coefficients whose first element is the intercept of model (2). Selection of the auxiliary variables included in the model was performed using the method described in . Stand-level random effects $v_{\delta i}$ are assumed to be independently and identically distributed (i.i.d.) normal random variables $v_{\delta i} \sim N ( 0 , \sigma_{\delta v}^{2} )$ for all $i = 1 , \ldots , D$, where $D$ is the total number of stands in the study area. Model errors are i.i.d. normal random variables $\varepsilon_{\delta ij} \sim N ( 0 , \sigma_{\delta \varepsilon}^{2} )$, independent of the stand-level random effects (i.e., $Cov ( \varepsilon_{\delta ij} , v_{\delta k} ) = 0$ for all $i , j$ and $k$). Models with spatially correlated errors and with non-constant error variances were initially considered but discarded in the model selection stage, as these features were not found to be significant (see Section 2.5.3).
For a generic set of population units denoted by the subscript $\xi$ (which can represent either $s$ or $\alpha$), the relation, in matrix notation, between the vector of changes of the structural variable, $\delta_{\xi}$, and the auxiliary variables included in the model, $X_{\delta \xi}$, is expressed as:
$\delta_{\xi} = X_{\delta \xi} \beta_{\delta} + Z_{\delta \xi} v_{\delta} + \varepsilon_{\delta \xi} ,$
where $\delta_{\xi} = ( \delta_{1} , \ldots , \delta_{N_{\xi}} )^{t}$, with $\delta_{k}$ being the yearly change of the forest structural variable $y$ in the $k$th unit of $\xi$, and $N_{\xi}$ is the number of elements in the set $\xi$. The $k$th element of $\xi$ will be an element of a given stand.
To explicitly indicate this membership we will use, when necessary, the sub-indexes $i$ and $j$ to respectively indicate the stand and the index of the element within the stand. The $k$th row of the matrix $X_{\delta \xi}$ is $x_{\delta k}^{t}$. The vector $v_{\delta} = ( v_{\delta 1} , \ldots , v_{\delta D} )^{t}$ is a vector of stand-level random effects with variance-covariance matrix $G_{\delta} = \sigma_{\delta v}^{2} I_{D}$, where $I_{D}$ is the identity matrix of dimension $D$. The matrix $Z_{\delta \xi}$ is a $N_{\xi} \times D$ incidence matrix that describes stand membership for each population unit. The $k$th row of $Z_{\delta \xi}$ has zeros at all positions except at position $i$, where $i$ is the index of the stand to which the $k$th unit of $\xi$ belongs. Finally, $\varepsilon_{\delta \xi}$ is a vector of $N_{\xi}$ model errors with diagonal variance-covariance matrix $R_{\delta \xi} = \sigma_{\delta \varepsilon}^{2} I_{N_{\xi}}$.
To simplify the notation, hereafter $\theta_{\delta}$ will represent the vector of variance parameters. The variance-covariance matrix of $\delta_{\xi}$ is:
$V_{\delta \xi} ( \theta_{\delta} ) = Z_{\delta \xi} G_{\delta} ( \theta_{\delta} ) Z_{\delta \xi}^{t} + R_{\delta \xi} ( \theta_{\delta} ) .$
Model (3) is a linear mixed effects model and a special case of the basic unit-level model described in (pp. 174).

#### 2.5.2. Target Parameter $\delta$-modeling Method

Under model (2) the target parameter (1) for a generic AOI $\alpha$ can be expressed as:
$\Delta_{\alpha} = l_{\delta \alpha}^{t} \beta_{\delta} + m_{\delta \alpha}^{t} v_{\delta} + q_{\delta \alpha}^{t} \varepsilon_{\delta \alpha} .$
Thus, the target parameter is a linear target parameter similar to the one considered in , where $1^{t}$ is a vector of ones and $l_{\delta \alpha}^{t} = q_{\delta \alpha}^{t} X_{\delta \alpha}$, $m_{\delta \alpha}^{t} = q_{\delta \alpha}^{t} Z_{\delta \alpha}$ and $q_{\delta \alpha}^{t} = \frac{1}{N_{\alpha}} 1^{t}$ are vectors of known constants for the target AOI $\alpha$.

#### 2.5.3. Model Selection and Estimator $\delta$-modeling Method

The target parameter $\Delta_{\alpha}$ was estimated for all considered AOIs using $\hat{\Delta}_{\alpha}$, the empirical best linear unbiased predictor (EBLUP) described in . For each variable of interest, the auxiliary variables included in $X_{\delta \alpha}$ were preselected using the best subset selection procedure described in (pp. 179–180).
When models with similar values of the model root mean square error or coefficient of determination were compared, the preferred option was to select the model with the smallest value of $\sigma_{\delta v}^{2}$. This criterion is appropriate to minimize the leading term of the AOI-specific mean square errors (pp. 176). Pre-selected models, which assumed constant model error variances and no spatial correlation of model errors, were fitted using maximum likelihood (ML). In a subsequent stage, models were re-fitted using ML including: (1) an exponential spatial correlation model for the model errors and (2) a non-constant error variance where $\varepsilon_{\delta ij} \sim N ( 0 , \sigma_{\delta \varepsilon}^{2} k_{ij}^{2 w_{\delta}} )$. The term $k_{ij}$ is the value of the predictor included in the model that is most correlated with $\delta$, and $w_{\delta}$ is an additional parameter to account for heteroscedasticity. For all variables, no clear patterns of spatial correlation or non-constant variances were observed, which supports the model form described in Section 2.5.1.
Final estimates $\hat{\theta}_{\delta}$ of the variance parameters $\theta_{\delta}$ were obtained using restricted maximum likelihood (REML) with the R package nlme . The estimates $\hat{\beta}_{\delta} ( \hat{\theta}_{\delta} )$ of $\beta_{\delta}$ are functions of the estimated variance parameters, Equation (6):
$\hat{\beta}_{\delta} ( \hat{\theta}_{\delta} ) = \{ X_{\delta s}^{t} \hat{V}_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} X_{\delta s} \}^{- 1} X_{\delta s}^{t} \hat{V}_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} \delta_{s} .$
Matrices $\hat{V}_{\delta s} ( \hat{\theta}_{\delta} )$, $\hat{G}_{\delta} ( \hat{\theta}_{\delta} )$ and $\hat{R}_{\delta s} ( \hat{\theta}_{\delta} )$ are obtained by replacing the variance parameters $\theta_{\delta}$ in $V_{\delta s} ( \theta_{\delta} )$, $G_{\delta} ( \theta_{\delta} )$ and $R_{\delta s} ( \theta_{\delta} )$ by their REML estimates $\hat{\theta}_{\delta}$.
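Numerically, the estimator in Equation (6) is a generalized least squares computation once $\hat{V}_{\delta s}$ has been built from the variance estimates. The sketch below uses toy values for the design matrix, stand membership and variance parameters; none of these numbers come from the BMEF data:

```python
import numpy as np

def gls_beta(X, V, d):
    """Equation (6): beta_hat = (X' V^-1 X)^-1 X' V^-1 delta_s."""
    Vinv = np.linalg.inv(V)
    XtVinv = X.T @ Vinv
    return np.linalg.solve(XtVinv @ X, XtVinv @ d)

# Toy setup: 4 plots in 2 stands, intercept + one LiDAR predictor.
X = np.column_stack([np.ones(4), [1.0, 2.0, 3.0, 4.0]])
Z = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # stand incidence
sigma_v2, sigma_e2 = 0.5, 1.0        # hypothetical variance estimates
V = sigma_v2 * Z @ Z.T + sigma_e2 * np.eye(4)                # Equation (4)
d = np.array([1.1, 2.0, 2.9, 4.2])   # observed per-year changes
beta_hat = gls_beta(X, V, d)
```

With $V$ proportional to the identity, the expression collapses to ordinary least squares, which is a quick sanity check when implementing it.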
EBLUPs $\hat{\Delta}_{\alpha}$ are also functions of $\hat{\theta}_{\delta}$ and are obtained using Equation (7):
$\hat{\Delta}_{\alpha} ( \hat{\theta}_{\delta} ) = l_{\delta \alpha}^{t} \hat{\beta}_{\delta} ( \hat{\theta}_{\delta} ) + m_{\delta \alpha}^{t} \hat{v}_{\delta} ( \hat{\theta}_{\delta} ) ,$
where $\hat{v}_{\delta} ( \hat{\theta}_{\delta} )$ equals:
$\hat{v}_{\delta} ( \hat{\theta}_{\delta} ) = \hat{G}_{\delta} ( \hat{\theta}_{\delta} ) Z_{\delta s}^{t} \hat{V}_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} \{ \delta_{s} - X_{\delta s} \hat{\beta}_{\delta} ( \hat{\theta}_{\delta} ) \} .$
It is important to note that for AOIs in unsampled stands (i.e., pixels in unsampled stands or the unsampled stands themselves), estimation will be made assuming that the model fit for the sampled stands also holds for the unsampled stands. Under that assumption, $m_{\delta \alpha}^{t} \hat{v}_{\delta} ( \hat{\theta}_{\delta} ) = 0$ and $\hat{\Delta}_{\alpha} ( \hat{\theta}_{\delta} ) = l_{\delta \alpha}^{t} \hat{\beta}_{\delta} ( \hat{\theta}_{\delta} )$ is a synthetic predictor.

#### 2.5.4. MSE Estimators for the $\delta$-modeling Method

For all AOIs, the mean squared error of the EBLUP was estimated using the estimator provided by and extended in to account for the fact that AOIs can contain a small number of population units. This estimator is the sum of three components, where the last one, $2 g_{3 \delta \alpha} ( \hat{\theta}_{\delta} )$, is a bias correction factor:
$\widehat{MSE} \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \} = g_{1 \delta \alpha} ( \hat{\theta}_{\delta} ) + g_{2 \delta \alpha} ( \hat{\theta}_{\delta} ) + 2 g_{3 \delta \alpha} ( \hat{\theta}_{\delta} ) .$
The first term of (9) equals:
$g_{1 \delta \alpha} ( \hat{\theta}_{\delta} ) = m_{\delta \alpha}^{t} \{ \hat{G}_{\delta} ( \hat{\theta}_{\delta} ) - \hat{G}_{\delta} ( \hat{\theta}_{\delta} ) Z_{\delta s}^{t} \hat{V}_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} Z_{\delta s} \hat{G}_{\delta} ( \hat{\theta}_{\delta} ) \} m_{\delta \alpha} + q_{\delta \alpha}^{t} \hat{R}_{\delta \alpha} ( \hat{\theta}_{\delta} ) q_{\delta \alpha} .$
The second term of (9) is:
$g_{2 \delta \alpha} ( \hat{\theta}_{\delta} ) = d_{\delta \alpha}^{t} \{ X_{\delta s}^{t} \hat{V}_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} X_{\delta s} \}^{- 1} d_{\delta \alpha} ,$
with $d_{\delta \alpha}^{t} = l_{\delta \alpha}^{t} - m_{\delta \alpha}^{t} \hat{G}_{\delta} ( \hat{\theta}_{\delta} ) Z_{\delta s}^{t} \hat{V}_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} X_{\delta s}$. The term $g_{1 \delta \alpha} ( \hat{\theta}_{\delta} )$ of $\widehat{MSE} \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \}$ accounts for the uncertainty due to the estimation of the random effects, while $g_{2 \delta \alpha} ( \hat{\theta}_{\delta} )$ accounts for the uncertainty due to estimating $\beta_{\delta}$.
For model (2), it is possible to compute a bias correction factor for the mean square error estimator that accounts for the uncertainty due to estimating $\theta_{\delta}$.
This correction factor equals:
$g_{3 \delta \alpha} ( \hat{\theta}_{\delta} ) = tr \{ ( \left. \frac{\partial b_{\delta \alpha}^{t}}{\partial \theta_{\delta}} \right|_{\hat{\theta}_{\delta}} ) V_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} ( \left. \frac{\partial b_{\delta \alpha}^{t}}{\partial \theta_{\delta}} \right|_{\hat{\theta}_{\delta}} )^{t} \bar{V}_{\delta s} ( \hat{\theta}_{\delta} ) \} ,$
where $b_{\delta \alpha}^{t} = m_{\delta \alpha}^{t} G_{\delta} ( \theta_{\delta} ) Z_{\delta s}^{t} V_{\delta s} ( \theta_{\delta} )^{- 1}$ and $\bar{V}_{\delta s} ( \hat{\theta}_{\delta} )$ is the asymptotic variance-covariance matrix of $\hat{\theta}_{\delta}$, obtained from the inverse of the Fisher information matrix for the fitted model. Explicit formulas for $g_{3 \delta \alpha} ( \hat{\theta}_{\delta} )$ are provided in (pp. 179–180). This bias correction factor was used as a reference in comparisons with the $y$-modeling method.
All estimates of the mean square errors for AOIs in unsampled stands were made assuming that the model fitted for the sampled stands holds in the unsampled stands. Under this assumption, the leading term $g_{1 \delta \alpha} ( \hat{\theta}_{\delta} )$ of $\widehat{MSE} \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \}$ will be larger than if the stand containing the AOI had been sampled. This occurs because the negative term $m_{\delta \alpha}^{t} \hat{G}_{\delta} ( \hat{\theta}_{\delta} ) Z_{\delta s}^{t} \hat{V}_{\delta s} ( \hat{\theta}_{\delta} )^{- 1} Z_{\delta s} \hat{G}_{\delta} ( \hat{\theta}_{\delta} ) m_{\delta \alpha}$ makes $g_{1 \delta \alpha} ( \hat{\theta}_{\delta} )$ smaller as the stand sample size increases.

#### 2.6.1. Model $y$-modeling Method

Models in the $y$-modeling method relate the forest structural variables in a population unit at different points in time to the auxiliary variables for that population unit. To indicate that these models directly consider the variables of interest, model parameters, stand-level random effects and model errors will include the subscript $y$. Auxiliary variables considered in the $y$-modeling method include the LiDAR auxiliary variables for 2009 and 2015 (i.e., Set 1 and Set 2, respectively) plus the incoming solar radiation and the structural diversity and presence of prescribed fires factors, Set 4, for both 2009 and 2015.
The modeling for this method started by obtaining models for the variable of interest in 2009 and models for the variable of interest in 2015.
For a given time $t$, the variable of interest in the $j$th population unit of the $i$th stand is expressed as:
$y_{ijt} = x_{yijt}^{t} \beta_{yt} + u_{yit} + e_{yijt} ,$
where $x_{yijt}^{t}$ is a vector of auxiliary variables, specific for time $t$, in which the first element takes the value 1. The term $\beta_{yt}$ is a vector of time-specific coefficients with the first element representing the model intercept. The random components of model (13) are the stand-level random effects $u_{yit}$ and the model errors $e_{yijt}$. To account for heteroscedasticity, model errors $e_{yijt}$ were of the form $e_{yijt} = \varepsilon_{yijt} k_{ijt}^{\omega_{yt}}$ with $\varepsilon_{yijt} \sim N ( 0 , \sigma_{y \varepsilon t}^{2} )$; the term $k_{ijt}$ is the value of the predictor included in the model for time $t$ that is most correlated with $y_{t}$, and $\omega_{yt}$ is a parameter modeling the change in the error variance. The stand-level random effects $u_{yit}$ were assumed to be independently and identically distributed (i.i.d.) normal random variables $u_{yit} \sim N ( 0 , \sigma_{yut}^{2} )$ for all $i = 1 , \ldots , D$, where $D$ is the total number of stands in the study area. Model errors were assumed independent of the stand-level random effects (i.e., $Cov ( e_{yijt} , u_{ykt} ) = 0$ for all $i , j$ and $k$). Finally, model errors were considered independent, with $Cov ( \varepsilon_{yijt} , \varepsilon_{yklt} ) = 0$ if $i \neq k$ or $j \neq l$, for both $t = 2009$ and $t = 2015$. Models with spatially correlated errors were initially considered, but discarded for both years in the selection stage as no clear spatial correlation patterns were observed in the residuals.
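To make the error structure of model (13) concrete, the following simulation, with invented parameter values rather than fitted BMEF quantities, generates data with stand-level random effects and errors whose standard deviation grows as a power of the covariate $k$, and then checks that the residual spread does increase with $k$:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_y(x, k, stand, beta, sigma_u, sigma_eps, omega, rng):
    """Model (13) sketch: y = beta0 + beta1*x + u_stand + eps * k**omega."""
    u = rng.normal(0.0, sigma_u, size=stand.max() + 1)  # stand random effects
    eps = rng.normal(0.0, sigma_eps, size=len(x))       # homoscedastic base errors
    return beta[0] + beta[1] * x + u[stand] + eps * k ** omega

n = 2000
x = rng.uniform(5.0, 25.0, n)          # e.g., mean LiDAR elevation
k = x                                  # predictor most correlated with y
stand = rng.integers(0, 10, n)
y = simulate_y(x, k, stand, beta=(2.0, 8.0),
               sigma_u=5.0, sigma_eps=0.6, omega=1.0, rng=rng)

# Residual spread around the fixed part should grow with k.
resid = y - (2.0 + 8.0 * x)
sd_low, sd_high = resid[k < 15.0].std(), resid[k >= 15.0].std()
```

This power-of-covariate variance structure is the same idea implemented by nlme's `varPower` variance function in R.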
Auxiliary variables included in the models were selected following the same procedure used in the $\delta$-modeling method, using the best subset selection procedure described in .
To account for the expected correlations, the models for 2009 and 2015 were combined into a single model where stand-level random effects and model errors for 2009 and 2015 were allowed to be correlated over time. Then, for the $j$th population unit in the $i$th stand, the two-dimensional vector $y_{ij} = ( y_{ij09} , y_{ij15} )^{t}$ of variables of interest was related to the auxiliary variables through the following model:
$y_{ij} = X_{ij} \beta_{y} + B_{ij} v_{yi} + e_{yij} ,$
with:
$X_{ij} = \begin{pmatrix} x_{yij09}^{t} & 0_{p2015}^{t} \\ 0_{p2009}^{t} & x_{yij15}^{t} \end{pmatrix} , \quad \beta_{y} = ( \beta_{y09}^{t} , \beta_{y15}^{t} )^{t} , \quad B_{ij} = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} , \quad v_{yi} = ( v_{yi} , v_{yi09} , v_{yi15} )^{t} ,$
where $0_{p2009}^{t}$ and $0_{p2015}^{t}$ are, respectively, row vectors of zeroes of dimensions equal to those of $x_{yij09}^{t}$ and $x_{yij15}^{t}$.
As with the time-specific models, to account for heteroscedasticity in the combined model, model errors $e_{yijt}$ were of the form $e_{yijt} = \varepsilon_{yijt} k_{ijt}^{\omega_{yt}}$ with $\varepsilon_{yijt} \sim N ( 0 , \sigma_{y \varepsilon t}^{2} )$. The parameters $\omega_{yt}$ were updated when fitting the combined model. Spatial correlation of model errors was not found to be significant when considering each year separately; therefore, no spatial correlation patterns were considered in the combined model. The only source of correlation of model errors present in the combined model was temporal correlation. For a given location, the variables $\varepsilon_{yij09} \sim N ( 0 , \sigma_{y \varepsilon 09}^{2} )$ and $\varepsilon_{yij15} \sim N ( 0 , \sigma_{y \varepsilon 15}^{2} )$ were allowed to be correlated random variables.
The correlation between $\varepsilon_{yij09}$ and $\varepsilon_{yij15}$ is $\rho_{\varepsilon}$ and the variance-covariance matrix of $e_{yij}$ is:
$Cov \{ ( e_{yij09} , e_{yij15} )^{t} \} = R_{yij} = \begin{pmatrix} \sigma_{y \varepsilon 09}^{2} k_{ij09}^{2 \omega_{y09}} & \rho_{\varepsilon} \sigma_{y \varepsilon 09} \sigma_{y \varepsilon 15} k_{ij09}^{\omega_{y09}} k_{ij15}^{\omega_{y15}} \\ \rho_{\varepsilon} \sigma_{y \varepsilon 09} \sigma_{y \varepsilon 15} k_{ij09}^{\omega_{y09}} k_{ij15}^{\omega_{y15}} & \sigma_{y \varepsilon 15}^{2} k_{ij15}^{2 \omega_{y15}} \end{pmatrix} .$
To model the correlation between stand-level random effects, three random components $v_{yi}$, $v_{yi09}$ and $v_{yi15}$, independent of each other, were considered. These random components had distributions $v_{yi} \sim N ( 0 , \sigma_{yv}^{2} )$, $v_{yi09} \sim N ( 0 , \sigma_{yv09}^{2} )$ and $v_{yi15} \sim N ( 0 , \sigma_{yv15}^{2} )$. The stand-level random effect for a given point at time $t$, $u_{yit}$, is the sum of a pure stand effect $v_{yi}$, independent of time, and a time-specific stand random effect, $v_{yi09}$ or $v_{yi15}$. The term $B_{ij} v_{yi} = u_{yi} = ( v_{yi} + v_{yi09} , v_{yi} + v_{yi15} )^{t} = ( u_{yi09} , u_{yi15} )^{t}$ represents these sums. The variance-covariance matrix of $v_{yi}$ is diagonal; therefore, the variance-covariance matrix of $u_{yi}$ is:
$Cov \{ ( u_{yi09} , u_{yi15} )^{t} \} = G_{yi} = \begin{pmatrix} \sigma_{yv}^{2} + \sigma_{yv09}^{2} & \sigma_{yv}^{2} \\ \sigma_{yv}^{2} & \sigma_{yv}^{2} + \sigma_{yv15}^{2} \end{pmatrix} .$
The fact that the random effect $v_{yi}$ is present for both 2009 and 2015 results in a positive correlation of the terms of $u_{yi}$, with correlation coefficient $\rho_{u} = \frac{\sigma_{yv}^{2}}{\sqrt{( \sigma_{yv}^{2} + \sigma_{yv09}^{2} ) ( \sigma_{yv}^{2} + \sigma_{yv15}^{2} )}}$. In a last step, models with a simpler structure of random effects were fitted and compared to the original models using a likelihood ratio test. Simplified models contained only the random effects $v_{yi}$, which do not depend on time (i.e., these models did not contain the time-specific random effects $v_{yi09}$ and $v_{yi15}$).
For the simplified models, $u_{yi09} = u_{yi15} = v_{yi}$.
For a generic set of population units $\xi$, the combined model can be expressed in matrix notation as:
$y_{\xi} = X_{y \xi} \beta_{y} + Z_{y \xi} v_{y} + e_{y \xi} ,$
where $y_{\xi}$, $e_{y \xi}$ and $X_{y \xi}$ are obtained by stacking the vectors $y_{ij}$ and $e_{yij}$, or the matrices $X_{ij}$, of all units in $\xi$. As no spatial correlation patterns were found, the variance-covariance matrix of $e_{y \xi}$ is $R_{y \xi} = diag_{i , j \in \xi} ( R_{yij} )$, a block-diagonal matrix of dimension $2 N_{\xi} \times 2 N_{\xi}$ with $2 \times 2$ blocks equal to $R_{yij}$. The vector of stand-level random effects $v_{y}$ is obtained by stacking the stand-level random effects of all stands, and the matrix $Z_{y \xi}$ is an incidence matrix of dimension $2 N_{\xi} \times D$ for the simplified models and $2 N_{\xi} \times 3 D$ for the models with time-specific random effects. The variance-covariance matrix of $y_{\xi}$ can be expressed as:
$V_{y \xi} ( \theta_{y} ) = Z_{y \xi} G_{y} ( \theta_{y} ) Z_{y \xi}^{t} + R_{y \xi} ( \theta_{y} ) .$
In Equation (19) it is explicitly indicated that the matrices $V_{y \xi} ( \theta_{y} )$, $G_{y} ( \theta_{y} )$ and $R_{y \xi} ( \theta_{y} )$ depend on the vector of variance-covariance parameters $\theta_{y} = ( \sigma_{yv}^{2} , \sigma_{yv09}^{2} , \sigma_{yv15}^{2} , \sigma_{y \varepsilon 09}^{2} , \sigma_{y \varepsilon 15}^{2} , \rho_{\varepsilon} )^{t}$. For the models with simplified random effects, the vector of variance-covariance parameters reduces to $\theta_{y} = ( \sigma_{yv}^{2} , \sigma_{y \varepsilon 09}^{2} , \sigma_{y \varepsilon 15}^{2} , \rho_{\varepsilon} )^{t}$. Model (18) is a special case of a linear mixed effects model with block-diagonal covariance structure.

#### 2.6.2. Target Parameter $y$-modeling Method

Under model (18), the target parameter (1) for a generic AOI $\alpha$ is a linear combination of the form:
$\Delta_{\alpha} = \frac{1}{6 N_{\alpha}} \sum_{i = 1}^{N_{\alpha}} ( y_{i \alpha 15} - y_{i \alpha 09} ) = l_{y \alpha}^{t} \beta_{y} + m_{y \alpha}^{t} u_{y} + q_{y \alpha}^{t} e_{y \alpha} ,$
where $l_{y \alpha}^{t} = q_{y \alpha}^{t} X_{y \alpha}$, $m_{y \alpha}^{t} = q_{y \alpha}^{t} Z_{y \alpha}$ and $q_{y \alpha}^{t}$ are vectors of known constants for the target AOI $\alpha$, with $q_{y \alpha}^{t}$ a vector of dimension $2 N_{\alpha}$ whose $k$th element equals $\frac{( - 1 )^{k}}{6 N_{\alpha}}$.
It is important to remark that for models with a simplified structure of stand random effects, the target parameters do not depend on $u_{y}$. For these models, $y_{ij15} - y_{ij09} = ( x_{yij15}^{t} \beta_{y15} - x_{yij09}^{t} \beta_{y09} ) + ( e_{yij15} - e_{yij09} )$, since $u_{yi15} - u_{yi09} = v_{yi} - v_{yi} = 0$. For this type of model, one can expect significant gains in accuracy because it is not necessary to estimate random effects.

#### 2.6.3. Estimator $y$-modeling Method, and Estimator of the MSE

Model (18) is a linear mixed effects model with block-diagonal structure and $\Delta_{\alpha}$ a linear model parameter; thus, after (pp. 108–110), the EBLUP $\hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} )$ of $\Delta_{\alpha}$ is:
$\hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) = l_{y \alpha}^{t} \hat{\beta}_{y} ( \hat{\theta}_{y} ) + m_{y \alpha}^{t} \hat{v}_{y} ( \hat{\theta}_{y} ) ,$
where $\hat{\beta}_{y} ( \hat{\theta}_{y} )$ equals:
$\hat{\beta}_{y} ( \hat{\theta}_{y} ) = \{ X_{ys}^{t} \hat{V}_{ys} ( \hat{\theta}_{y} )^{- 1} X_{ys} \}^{- 1} X_{ys}^{t} \hat{V}_{ys} ( \hat{\theta}_{y} )^{- 1} y_{s} .$
Matrices $\hat{V}_{ys} ( \hat{\theta}_{y} )$, $\hat{G}_{y} ( \hat{\theta}_{y} )$ and $\hat{R}_{ys} ( \hat{\theta}_{y} )$ are obtained by replacing the variance parameters $\theta_{y}$ in $V_{ys} ( \theta_{y} )$, $G_{y} ( \theta_{y} )$ and $R_{ys} ( \theta_{y} )$ by their REML estimates $\hat{\theta}_{y}$. The predicted random effects $\hat{v}_{y} ( \hat{\theta}_{y} )$ equal:
$\hat{v}_{y} ( \hat{\theta}_{y} ) = \hat{G}_{y} ( \hat{\theta}_{y} ) Z_{ys}^{t} \hat{V}_{ys} ( \hat{\theta}_{y} )^{- 1} \{ y_{s} - X_{ys} \hat{\beta}_{y} ( \hat{\theta}_{y} ) \} .$
As with the $\delta$-modeling method, estimates for AOIs in unsampled stands were made assuming that the model fit for the sampled stands also applies in the unsampled stands, which leads to $m_{y \alpha}^{t} \hat{v}_{y} ( \hat{\theta}_{y} ) = 0$, so that $\hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) = l_{y \alpha}^{t} \hat{\beta}_{y} ( \hat{\theta}_{y} )$ is a synthetic predictor.
For all AOIs, the estimator of the mean square error of the EBLUP under the $y$-modeling method, $\widehat{MSE} \{ \hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) \}$, is:
$\widehat{MSE} \{ \hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) \} = g_{1 y \alpha} ( \hat{\theta}_{y} ) + g_{2 y \alpha} ( \hat{\theta}_{y} ) .$
The terms $g_{1 y \alpha} ( \hat{\theta}_{y} )$ and $g_{2 y \alpha} ( \hat{\theta}_{y} )$ in (24) are analogous to those in (10) and (11) and have a similar interpretation.
To compute $g_{1 y \alpha} ( \hat{\theta}_{y} )$ and $g_{2 y \alpha} ( \hat{\theta}_{y} )$, the matrices $\hat{G}_{\delta} ( \hat{\theta}_{\delta} )$, $\hat{R}_{\delta s} ( \hat{\theta}_{\delta} )$, $\hat{V}_{\delta s} ( \hat{\theta}_{\delta} )$ and $\hat{R}_{\delta \alpha} ( \hat{\theta}_{\delta} )$ must be replaced by $\hat{G}_{y} ( \hat{\theta}_{y} )$, $\hat{R}_{ys} ( \hat{\theta}_{y} )$, $\hat{V}_{ys} ( \hat{\theta}_{y} )$ and $\hat{R}_{y \alpha} ( \hat{\theta}_{y} )$. For the $y$-modeling method we did not compute the second-order correction factors.

#### 2.7. Comparison of Methods

Methods were compared using three different criteria. First, we used general measures of accuracy providing the average error or uncertainty of prediction at the pixel level (Section 2.7.1); then, we compared methods using AOI-specific estimates and measures of uncertainty (Section 2.7.2). Finally, we assessed the risk of generating biased predictions when using the $\delta$-modeling method and the $y$-modeling method in unsampled stands (Section 2.7.3).

#### 2.7.1. General Accuracy Assessment

To compare the $\delta$-modeling method and the $y$-modeling method, a first assessment was made using the cross-validated model root mean squared error, $mRMSE$, and the model bias, $mBias$:
$mRMSE = \sqrt{\frac{\sum_{i , j \in s} ( \delta_{ij} - \hat{\delta}_{ij} )^{2}}{n}} ,$
$mBias = \frac{\sum_{i , j \in s} ( \delta_{ij} - \hat{\delta}_{ij} )}{n} ,$
where $\delta_{ij}$ is the observed value of change for the $j$th plot included in the $i$th sampled stand, and $\hat{\delta}_{ij}$ is the predicted value for that plot when the model coefficients are obtained removing that plot from the training dataset. For the $y$-modeling method, $\hat{\delta}_{ij}$ is obtained from the fitted values of the variable of interest as $\hat{\delta}_{ij} = \frac{1}{6} ( \hat{y}_{ij15} - \hat{y}_{ij09} )$, where $\hat{y}_{ij09}$ and $\hat{y}_{ij15}$ are the predictions of $y_{ij09}$ and $y_{ij15}$ obtained by fitting the corresponding $y$-model without the observations for plot $ij$. In addition, we computed $mRMSE$ and $mBias$ in terms relative to the average change observed in the sampled plots.
These quantities are denoted as $mRRMSE = mRMSE / \hat{\Delta}_{f}$ and $mRBias = mBias / \hat{\Delta}_{f}$, where $\hat{\Delta}_{f}$ is the mean of the changes observed in the field plots.

#### 2.7.2. AOI-specific Comparisons.

For each of the considered areas of interest, an estimate by each method (i.e., the EBLUPs $\hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} )$ and $\hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} )$) and the corresponding mean square error estimates (i.e., $\widehat{MSE} \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \}$ and $\widehat{MSE} \{ \hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) \}$) were available. First, for each AOI and method, we directly compared $\hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} )$ and $\hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} )$, and the square roots of $\widehat{MSE} \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \}$ and $\widehat{MSE} \{ \hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) \}$. To simplify the notation, we will omit the subscript indicating the target AOI unless it is necessary, and refer to these estimates as $\hat{\Delta}_{\delta}$ and $\hat{\Delta}_{y}$. Similarly, after omitting the subscript $\alpha$, the AOI-specific root mean square errors will be denoted as:
$RMSE_{\delta} = \sqrt{\widehat{MSE} \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \}} ,$
$RMSE_{y} = \sqrt{\widehat{MSE} \{ \hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) \}} .$
To perform an assessment relative to the predicted values, the following coefficients of variation:
$CV \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \} = \frac{\sqrt{\widehat{MSE} \{ \hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} ) \}}}{\hat{\Delta}_{\delta \alpha} ( \hat{\theta}_{\delta} )} ,$
$CV \{ \hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) \} = \frac{\sqrt{\widehat{MSE} \{ \hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} ) \}}}{\hat{\Delta}_{y \alpha} ( \hat{\theta}_{y} )} ,$
were computed for each AOI and method, and compared between methods for each AOI. To simplify the notation, we will refer to these coefficients of variation as $CV_{\delta}$ and $CV_{y}$. Finally, for each sampled AOI we computed, using only the field information, the sample mean $\hat{\Delta}_{f \alpha}$ and its standard error:
$SE_{f \alpha} = \sqrt{\frac{\sum_{i , j \in s_{\alpha}} ( \delta_{ij} - \hat{\Delta}_{f \alpha} )^{2}}{( n_{\alpha} - 1 ) n_{\alpha}}} ,$
and its coefficient of variation $CV_{f \alpha}$. In Equation (31), $n_{\alpha}$ is the number of field plots in the considered AOI, and the sub-index $f$ indicates that these quantities are calculated using only field data.
Again, to simplify the notation, we removed the sub-index $\alpha$ unless it was necessary. The sample mean and its coefficient of variation were then compared to their counterparts obtained with the $\delta$-modeling method, (7) and (29), and with the $y$-modeling method, (21) and (30).

#### 2.7.3. Extrapolation to Thinned Stands

The fact that thinned stands were not represented in the sample of field plots raises the question of how applicable the models obtained using either the $\delta$-modeling method or the $y$-modeling method are to these stands. Applying the models to these stands involves a degree of extrapolation to a different population and a high risk of producing biased predictions. We assessed this risk by comparing the distributions of the LiDAR predictors included in the models for the $\delta$-modeling method and the $y$-modeling method for the sample of field plots to the distributions of the predictors in: (1) the sampled stands, (2) the unsampled and not thinned stands and (3) the unsampled and thinned stands. Within each group (i.e., field plots, FP; sampled stands not thinned, SS; unsampled stands not thinned, UN; and unsampled stands subject to thinning, UT), we estimated density functions for each LiDAR predictor using a Gaussian kernel and a bandwidth determined using Silverman's rule . Note that we considered two AOIs for the largest level of aggregation: the first one is SS and the other one, SA, is the union of SS and UN. We first considered each predictor separately and graphically compared their density functions. Predictors for 2009 and 2015 in the $y$-modeling method were considered separately.
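A density comparison of this kind can be sketched with `scipy.stats.gaussian_kde`, which uses a Gaussian kernel and accepts Silverman's rule via `bw_method="silverman"`. The samples below are synthetic, and the overlap statistic computed here (the integral of the pointwise minimum of the two densities) is one standard way to quantify the agreement between two estimated densities:

```python
import numpy as np
from scipy.stats import gaussian_kde

def area_of_overlap(sample_a, sample_b, grid_size=512):
    """Integral of the pointwise minimum of two Gaussian-kernel density
    estimates (Silverman bandwidth); ~1 = identical distributions, 0 = disjoint."""
    kde_a = gaussian_kde(sample_a, bw_method="silverman")
    kde_b = gaussian_kde(sample_b, bw_method="silverman")
    lo = min(sample_a.min(), sample_b.min())
    hi = max(sample_a.max(), sample_b.max())
    pad = 0.25 * (hi - lo)                        # extend grid past the tails
    grid = np.linspace(lo - pad, hi + pad, grid_size)
    dx = grid[1] - grid[0]
    return float(np.minimum(kde_a(grid), kde_b(grid)).sum() * dx)

rng = np.random.default_rng(0)
field_plots = rng.normal(10.0, 2.0, 150)  # synthetic predictor values, FP group
similar = rng.normal(10.0, 2.0, 500)      # group resembling the sample
shifted = rng.normal(4.0, 2.0, 500)       # group with a shifted distribution
ao_similar = area_of_overlap(field_plots, similar)
ao_shifted = area_of_overlap(field_plots, shifted)
```

A group whose predictor distribution is shifted away from the field sample, as one would expect after thinning, yields a visibly smaller overlap than a group drawn from the same distribution.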
For each predictor and group, we computed the area of overlap, $AO$, with the density function for the field plots, which takes the value 0 if there is no overlap and the value 1 if the distribution of the predictor in the considered group equals the distribution for the sample.
In addition to the area of overlap, and aiming to consider all predictors in a given model at once, we calculated $\overline{NT2}$, the average of Mesgaran's novelty index $NT2$, for each model and group . This quantity provides the average Mahalanobis distance from the pixels of the group to the mean of the sample of field plots, expressed in terms relative to the maximum Mahalanobis distance observed in the sample. Values of $\overline{NT2}$ above one indicate that, on average, pixels in a group are at a distance from the mean of the field plots larger than the distance from the most extreme field observation to that mean. We also calculated $\overline{NT2}_{mean}$, the average of $NT2$ computed using the mean Mahalanobis distance as the normalizing constant instead of the maximum. The reference value of one for $\overline{NT2}_{mean}$ indicates that the average Mahalanobis distance from the pixels to the mean of the field plots is the same as the average of the Mahalanobis distances observed in the sample. Means and variance-covariance matrices for the computation of Mahalanobis distances were always estimated using the sample of field plots.

## 3. Results

#### 3.1. Selected Models $\delta$-modeling Method and $y$-modeling Method

Selected models for the $\delta$-modeling method included auxiliary variables from Set 1 and Set 3. It was possible to find alternative models including fixed effects for the diversity treatments (i.e., predictors from Set 4) with similar values of $mRMSE$ and $mRBias$; however, those models did not improve the model fit.
From a practical point of view, models that depend only on the LiDAR variables, and not on the structural diversity treatments or the presence/absence of prescribed fires, are more portable and applicable to stands for which the applied treatment is not exactly known. Considering that models using the structural diversity and presence of prescribed fires as predictors did not result in important gains in accuracy, we selected models that did not depend on these treatments (Table 2).
For the models in the $\delta$-modeling method, the variance of the random effects, $\hat{\sigma}_{\delta v}^{2}$, was very small compared to the variance of the model errors, $\hat{\sigma}_{\delta \varepsilon}^{2}$ (Table 2). This indicates that, in this forest and for these variables, the use of synthetic estimators that do not account for the variability between stands should not cause a strong bias problem.
Models for the $y$-modeling method showed a pattern similar to that observed for the $\delta$-modeling method and only included predictors from Set 1 and Set 2 (Table 3). Errors showed non-constant variance patterns for all variables. The predictor most correlated with the variable of interest (i.e., the predictor used to model the error variance) was the same for 2009 and 2015. For V and B, the variance of the model errors was a function of the square of the mean LiDAR elevation (Elev_mean2), and the exponents of the error variance function were very close to those obtained in [4,17,27] for V, and in for B. For BA, the variance of the model errors was a function of the percentage of first returns above two meters (PcFstAbv2). Based on the results of the likelihood ratio tests, which for all variables resulted in p-values larger than 0.87, simplified models were selected and used for prediction.

#### 3.2.
General Accuracy Assessment and Comparison of Methods\n\nFor all variables and modeling alternatives, values of $m B i a s$ and $m R B i a s$ were orders of magnitude smaller than $m R M S E$ and $m R R M S E$ (Table 2 and Table 3). For all variables and methods, the percentages of explained variance for the change in V, BA and B were low. For the $δ$-modeling method, models explained 34.38%, 31.37% and 39.04% of the variance of the change in V, BA and B, respectively. For the $y$-modeling method, models explained only 10.65% and 5.37% of the variance of the change in V and B, respectively, while for BA the prediction using the $y$-modeling method was not better than the sample mean. In addition, $δ$-models had values of $m R B i a s$ lower than those obtained for the $y$-models. When, instead of the change, we considered the forest structural attributes with the $y$-modeling method, percentages of explained variance were 82.16% for V, 82.53% for BA and 82.93% for B. Considering only 2009, the percentage of explained variance for V, BA and B was 81.60%, 83.45% and 82.84%, respectively. Considering only 2015, the percentage of explained variance for V, BA and B was 82.72%, 81.42% and 82.98%, respectively.\n\n#### 3.3.1. Entire Study Area\n\nEstimates for the sampled stands and for the whole study area using either the $δ$-modeling method or the $y$-modeling method were consistent with the estimates obtained using only the field information except for BA and B in SA. For the entire study area, values of $R M S E δ$ tended to be smaller than $R M S E y$. When considering the sampled stands, SS, approximate confidence intervals computed as $Δ ^ f ± 2 S E f$ for the field estimates, and as $Δ ^ δ ± 2 R M S E δ$ and $Δ ^ y ± 2 R M S E y$ for each one of the LiDAR-based methods, overlapped for all variables (Table 4) and contained estimates derived from other methods. 
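The interval-overlap check described above can be sketched as follows; this is a minimal illustration in which the numbers are hypothetical and only show the mechanics of the comparison, not results from this study.

```python
def interval(estimate, spread, k=2.0):
    """Approximate confidence interval: estimate +/- k * spread.

    'spread' plays the role of the standard error, SE, for the
    field-based estimates and of the RMSE for the LiDAR-based ones.
    """
    return (estimate - k * spread, estimate + k * spread)

def intervals_overlap(a, b):
    """True if two (low, high) intervals share at least one point."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical annual changes in V (m3 ha-1 year-1) for one group of stands.
field = interval(2.1, 0.45)        # field-based estimate +/- 2 SE
lidar_delta = interval(1.8, 0.35)  # delta-modeling estimate +/- 2 RMSE

print(intervals_overlap(field, lidar_delta))  # True: the estimates agree
```

Overlapping intervals, as in the example, are read here as agreement between the field-based and LiDAR-based estimates.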
Differences between the uncertainty of estimates obtained from LiDAR-based methods and the uncertainty of estimates obtained from field-based methods tended to be of small magnitude.\n\n#### 3.3.2. Stands\n\nEstimated changes of V, BA and B in the sampled stands by both the $δ$-modeling method and the $y$-modeling method agreed with their field-based counterparts in most stands (Figure 2). However, the width of the confidence intervals obtained using the $δ$-modeling method tended to be larger than that of the confidence intervals of the estimates derived using the $y$-modeling method (Figure 2).\nFor unsampled stands, estimates and confidence intervals had larger variability in stands where the forested area was small (Figure 3). This variability cannot be avoided, and indicates that certain sources of errors cannot be compensated if the number of pixels that are aggregated is low. Finally, for both methods, values of $R M S E δ$ and $R M S E y$ were in the range of 0.25 to 1 m3 ha−1 year−1 for V, of 0.02 to 0.15 m2 ha−1 year−1 for BA and of 0.10 to 0.80 Mg ha−1 year−1 for B. However, for B and V, $R M S E y$ tended to be smaller than $R M S E δ$, while negligible differences between methods were observed for BA (Figure 3 and Figure 4).\nFor the thinned (and unsampled) stands, differences between the $δ$-modeling method and the $y$-modeling method for BA were large and their confidence intervals did not overlap (Figure 3). For these stands, the estimates for BA using the $δ$-modeling method tended to indicate almost no changes in BA. Estimates for the thinned stands using the $δ$-modeling method provided inconsistent results indicating gains in B, and changes close to zero for V and BA. Certain inconsistencies were also observed for stands subject to thinning when using the $y$-modeling method, where predictions of the change in V and B were positive for three and five stands, respectively. 
These inconsistencies seem to derive from the fact that the distribution of predictors in Set 3 (i.e., changes in LiDAR predictors) in the thinned stands was rather different from the distribution of these predictors in the sample of field plots, in the sampled stands and in the unsampled and not thinned stands. For predictors of the $y$-modeling method, modeled differences between thinned stands and the remaining groups were of much smaller magnitude. Results for the analysis of the extrapolation risks are presented in detail in Section 3.4.\n\n#### 3.3.3. Pixel-level\n\nFor both methods, the inconsistencies observed at the stand level were also observed at the pixel level, especially the positive predictions of change obtained with the $δ$-modeling method in the thinned stands (Figure A1). In addition, due to the low correlations of LiDAR predictors with the change in V, BA, and B, predictions at this level have large uncertainties. Mean and median values of $R M S E δ$ were 2.30 m3 ha−1 year−1 and 3.30 m3 ha−1 year−1 for V, 0.39 m2 ha−1 year−1 and 0.38 m2 ha−1 year−1 for BA, and 1.67 Mg ha−1 year−1 and 1.65 Mg ha−1 year−1 for B. Mean and median values of $R M S E y$ were 2.49 m3 ha−1 year−1 and 2.20 m3 ha−1 year−1 for V, 0.48 m2 ha−1 year−1 and 0.48 m2 ha−1 year−1 for BA, and 1.89 Mg ha−1 year−1 and 1.76 Mg ha−1 year−1 for B (Table 5 and Figure A1). Predictions from the $δ$-modeling method tend to be smoother than predictions from the $y$-modeling method. For all variables, the proportion of pixel-level predictions using the $δ$-modeling method within the range of values observed for the field plots was always 99.84% or larger (Figure A2). Considering that these results were obtained in the presence of thinned stands and the relatively small fraction of the forest that was sampled, obtaining less than 0.16% of the predictions outside of the measurement range seems to be a clear sign of over smoothing (see Appendix B).\n\n#### 3.4. 
Extrapolation to Thinned Stands\n\nEstimates of change in B for the thinned stands by both methods were clearly subject to bias problems. The predicted change in B for the total area subject to thinning for the period 2009–2015, using the $δ$-modeling method, was an increase in biomass of 40,469.22 Mg. The predicted change using the $y$-modeling method was a removal of B. However, the predicted removal for the period 2009–2015 was only 1750.29 Mg, while the weighted extractions for the thinned stands were orders of magnitude larger. For BA, both methods estimated extractions, which is consistent with the fact that these stands were thinned. Estimated changes in BA using the $δ$-modeling method for the thinned stands ranged from −0.05 m2 ha−1 to −1.88 m2 ha−1, which seems to be a very small change in basal area. Estimated changes in BA using the $y$-modeling method ranged from −3.58 m2 ha−1 to −7.01 m2 ha−1. An advantage of the $y$-modeling method is that it allows one to obtain the values of the structural attributes at a given point in time. Using the $y$-model, we estimated BA for the thinned stands for 2015. For those stands where thinning prescriptions dictated leaving a residual BA of 17.22 m2 ha−1 to 25.25 m2 ha−1, estimated BA for 2015 ranged from 19.87 m2 ha−1 to 26.22 m2 ha−1, which is in accordance with the thinning prescriptions. For the remaining area subject to thinning, the estimated BA for 2015 was 17.64 m2 ha−1, while the prescriptions dictated leaving a residual BA ranging from 6.89 m2 ha−1 to 13.77 m2 ha−1 in 75% of the area and leaving the remaining area untouched. In general, the estimated BA values for 2015 are consistent with the prescriptions, which indicates that the $y$-modeling method produces reasonable estimates of BA when extrapolating to the thinned stands. 
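The averaged novelty indexes $N T 2 ¯$ and $N T 2 ¯ m e a n$ described in the Methods can be sketched as below. This is an illustration with simulated data: the sample sizes, number of predictors and distributions are assumptions, and the actual analysis follows Mesgaran's $N T 2$ definition, which may differ in detail from this simplified version.

```python
import numpy as np

def nt2_bar(pixels, sample, normalize="max"):
    """Average Mahalanobis distance from 'pixels' to the mean of the
    field-plot 'sample', divided by the maximum (NT2_bar) or the mean
    (NT2_bar_mean) Mahalanobis distance observed within the sample.
    """
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))

    def mdist(x):  # Mahalanobis distance of each row of x to mu
        d = x - mu
        return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

    ref = mdist(sample).max() if normalize == "max" else mdist(sample).mean()
    return mdist(pixels).mean() / ref

rng = np.random.default_rng(0)
sample = rng.normal(size=(200, 3))   # field plots, 3 model predictors
inside = rng.normal(size=(500, 3))   # pixels similar to the sample
shifted = inside + 6.0               # pixels far from the sample (e.g., thinned)

print(nt2_bar(inside, sample) < 1.0)           # True: little extrapolation
print(nt2_bar(shifted, sample, "mean") > 1.0)  # True: strong extrapolation
```

Values well above one for a group of pixels, as for the shifted group here, signal that predictions for that group rely on extrapolation outside the support of the field sample.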
In summary, for the estimation of changes, biases derived from extrapolation seemed to be of larger magnitude for the $δ$-modeling method, although they were also present for the $y$-modeling method.\nThe extrapolation indexes $N T 2 ¯$ and $N T 2 ¯ m e a n$ showed that predictions in thinned stands involved a large amount of extrapolation when using the $δ$-modeling method. For the $y$-modeling method, differences between thinned stands and stands not subject to thinning were of smaller magnitude (Figure 5 and Figure A3). The inspection of the distribution of the LiDAR predictors in the field plots, the sampled and not thinned stands, the unsampled and not thinned stands, and the unsampled and thinned stands showed similar results for all variables, with the distributions of predictors from Set 3 (i.e., changes in LiDAR predictors for the period 2009–2015) being very sensitive to the thinning operations (Figure 6, Figure A4 and Figure A5).\n\n## 4. Discussion\n\n#### 4.1. General Accuracy Assessment and Comparison of Methods\n\nThe smallest values of $m R M S E$ were obtained using the $δ$-method, which is consistent with previous results reported by Poudel et al. for V and B in coastal coniferous forest of Western Oregon and by Temesgen et al. for B in spruce-dominated forest of Alaska (Table 2 and Table 3). We observed, however, smaller differences between methods. Additionally, as observed in previous studies (e.g., [8,21,35,36]), LiDAR auxiliary variables showed a much stronger correlation with structural attributes at a given point in time than with their change.\nValues of $m R M S E$ for V were 3.47 m3 ha−1 year−1 when using the $δ$-method and 3.76 m3 ha−1 year−1 when using the $y$-method. These values are slightly smaller than the $m R M S E$ obtained by Poudel et al. using the $δ$-method (4.74 m3 ha−1 year−1) and two LiDAR acquisitions separated in time by five years. 
For B, $m R M S E$ using the $δ$-method and the $y$-method were, respectively, 1.72 Mg ha−1 year−1 and 1.94 Mg ha−1 year−1. These values were very close to those reported by Poudel et al. using the $δ$-method (1.88 Mg ha−1 year−1) and worse than those reported by Temesgen et al. (1.25 Mg ha−1 year−1 and 1.63 Mg ha−1 year−1), also using two LiDAR acquisitions separated in time by five years. Values of $R M S E$ for BA were similar to those obtained by Næsset and Gobakken in coniferous forest in Norway, using the $y$-method with log-transformed models and two LiDAR acquisitions that were two years apart. In relative terms, for V and B, the values that we obtained for $m R R M S E$ were considerably larger than those obtained by Poudel et al. These differences are due to the fact that the growth rates observed by Poudel et al. are much higher than those we observed at BMEF.\n\n#### 4.2.1. Entire Study Area\n\nMost studies on estimation of change of structural variables using repeated LiDAR measurements have focused on analyzing indexes of model fit and reported only global measures of accuracy developed at a plot level. There is an important difference between the values of $R M S E δ$ and $R M S E y$ and those of $m R M S E δ$ and $m R M S E y$, with $m R M S E δ$ and $m R M S E y$ being an order of magnitude larger than $R M S E δ$ and $R M S E y$. Model root mean square errors $m R M S E δ$ and $m R M S E y$ provide an average measure of the errors that can occur when predicting a single pixel. For large areas, there will be some level of compensation of overpredicted and underpredicted pixels. Knowing how important that compensation is requires calculating AOI-specific root mean square errors. These AOI-specific measures cannot be directly derived from $m R M S E$ because $R M S E δ$ and $R M S E y$ consider factors, such as the uncertainty in the estimation of the fixed and random effects, that are not accounted for in $m R M S E δ$ or $m R M S E y$. 
The effect of these factors on $R M S E δ$ and $R M S E y$ means that two models with similar values of $m R M S E$ may rank differently when compared in terms of $R M S E$. However, the most important consequence of the disconnect between $m R M S E δ$ and $m R M S E y$ and AOI-specific measures of uncertainty is that the former cannot be used as quality controls in LiDAR-based inventories.\nWhile numerous studies on estimation of changes using LiDAR rely on global measures of accuracy such as $m R M S E$, exceptions to this trend can be found in the literature [10,12,18,19,20]. The last four studies used model-assisted techniques to derive either landscape or stratum level changes. Reported errors in those studies changed depending on the modeling techniques and study areas, but they all were of similar magnitude for changes in biomass per hectare and year (Table 4). Errors for the methods tested in this study were smaller than those reported by , where changes in live carbon stocks in Norway were estimated using generalized regression estimators (GREG). Differences with the errors reported in for carbon, using the same 0.5 biomass to carbon conversion factor, were in the range of 0.12–0.09 Mg ha−1 year−1. These differences seem to be due to multiple factors, such as differences between study areas, changes in live biomass versus changes in standing biomass, time between LiDAR acquisitions, field plot sizes, etc. Further investigation is needed to test if the model-based estimators studied here and the GREG estimators in have a similar performance when used under the same conditions.\nThe study from Magnussen et al. also included model-based estimators using the $y$-modeling method. Reported errors were slightly larger than the ones observed here, but at the same time smaller in terms relative to the observed mean change. 
An important result from these comparisons was the drastic improvement in model accuracy when developing stratum-specific models (i.e., a set of model coefficients per stratum) as opposed to a global model for the whole study area. The mixed-effect models used in this study can be used in combination with stratification if sample sizes are large. The introduction of stand-level random effects allows for certain variability between AOIs and can be applied in situations where AOI sample sizes are limited.\n\n#### 4.2.2. Stands\n\nOne of the novelties of this study was the analysis of estimates for AOIs with sample sizes too small to develop AOI-specific models (i.e., stands). While at large scales both LiDAR-based and field-based estimates were very similar and had equivalent accuracies, at the stand level, LiDAR-based estimates clearly had smaller errors than their field-based counterparts. Qualitatively, this result for the change in V, BA and B is similar to the results obtained in [15,17] for the structural variables themselves and shows that the LiDAR auxiliary information allows for gains in efficiency when estimating changes in AOIs with small sample sizes. However, due to the low correlation of LiDAR and structural changes, values of $C V δ$ and $C V y$ were oftentimes larger than 50%. These values of $C V$ are larger than those observed for structural variables in similar AOIs in previous studies [4,14,17]. While differences were not of large magnitude, $R M S E y$ tended to be smaller than $R M S E δ$. In addition, $R M S E y$ had a larger variability because errors did not have constant variance. Finally, stand-level estimates using the $δ$-modeling method in the thinned and unsampled stands involved an important degree of extrapolation that can cause inconsistent estimates and severe biases, which indicates that the $δ$-modeling method is more sensitive to extrapolation than the $y$-modeling method.\n\n#### 4.2.3. 
Pixel-level\n\nFor the most detailed level of disaggregation, the magnitude of the errors was very large. This is due to the low correlation between LiDAR auxiliary variables and the change in structural attributes. First-order and second-order texture indexes are auxiliary variables with a promising potential for future research aiming to improve the prediction of structural changes. While for structural variables, maps at the pixel level can provide a reliable reference about the forest structure, for growth and changes, pixel-level maps like the one in Figure A1 should be taken as mere approximations. They could be used to infer certain trends and patterns, but the high values of $R M S E δ$ and $R M S E y$ show that estimates for a particular location made at the pixel scale can differ significantly from reality. These results clearly indicate that predictions at such a fine scale are highly unreliable, and it is necessary either to perform some level of spatial aggregation or to increase the time lapse between LiDAR acquisitions.\n\n#### 4.3. Advantages of Modeling Alternatives\n\nIn general, the $δ$-modeling method was found to be a better alternative to estimate changes for the entire study area than the $y$-modeling method; however, the $y$-modeling method produced better results at the stand level and also seemed to be advantageous to prevent problems related to extrapolation to values of the covariates outside of those included in model development.\nThe $δ$-modeling method offers faster model development and fitting, and is significantly simpler than the $y$-modeling method, as it is not necessary to consider differences between years and time correlations. The main disadvantage of this method is that it seems to be more prone to extrapolation errors. Predictors from Set 3 are sensitive to intense changes in the forest structure caused, for example, by thinning (see Figure 5, Figure A4, Figure A5 and Figure 6). 
The inspection of predictors of alternative models for V and B using this method revealed that inconsistencies of predictions in unthinned stands could be attenuated by including more predictors from Set 1. The sensitivity to changes of predictors from Set 3 can be an advantage if all possible changes are correctly represented in the field sample. However, for relatively short periods of time between acquisitions and a low amount of forest operations, changes that are not very frequent in the landscape can be misrepresented or even not included in the sample. Thus, results for areas subject to those changes can be severely biased and inconsistent.\nThe more complex model development for the $y$-modeling method may be compensated by its ability to produce a richer set of outputs, by its apparently smaller risk of extrapolation and by its more accurate estimates for AOIs with small sample sizes (i.e., stands). In this study we analyzed the performance of the $y$-modeling method when estimating change, but estimates of V, BA and B for all the AOIs in 2009 and 2015 could have been readily obtained using this method. Results from our study also support the idea that the $y$-modeling method has advantages over the $δ$-modeling method in terms of protection against inconsistent extrapolations. The distributions of predictors from Set 1 and Set 2 in thinned stands were relatively similar to the distributions observed for the sample, while the distributions of predictors from Set 3, used in the $δ$-modeling method, were rather different (see Figure 5, Figure A4, Figure A5 and Figure 6). The greater similarity between thinned stands and the sample of field plots, for predictors from Set 1 and Set 2, indicates that the effect of thinning, in terms of auxiliary information, can be seen as a transition from one situation in 2009 to another in 2015, and both seem to be represented in the field sample (e.g., Figure 5). 
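The area-of-overlap statistic used to compare these predictor distributions can be sketched as follows. This is only an illustration: the sample sizes, the simulated distributions and the use of a Gaussian kernel density estimate are assumptions, not details taken from the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

def area_of_overlap(x, y, grid_size=512):
    """Area of overlap between kernel density estimates of two samples:
    close to 0 for disjoint distributions and close to 1 for identical
    ones, mirroring the AO statistic used to compare predictor
    distributions between pixel groups and the field plots.
    """
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    fx, fy = gaussian_kde(x)(grid), gaussian_kde(y)(grid)
    return np.minimum(fx, fy).sum() * (grid[1] - grid[0])  # Riemann sum

rng = np.random.default_rng(1)
plots = rng.normal(0.0, 1.0, 300)    # predictor values in the field plots
similar = rng.normal(0.1, 1.0, 300)  # e.g., unsampled, not thinned stands
thinned = rng.normal(5.0, 1.0, 300)  # e.g., a Set 3 predictor after thinning

print(area_of_overlap(plots, similar) > 0.8)  # True: distributions match
print(area_of_overlap(plots, thinned) < 0.2)  # True: strong mismatch
```

A low overlap for a group, as for the simulated thinned group here, flags predictors whose values in that group are poorly represented in the field sample.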
If structures before and after the thinning (or other changes) are represented in the sample, the need for extrapolation will be limited. Within certain limits, if the sampling design covers all structures present at both points in time, even if there is a particular change from one structure to another that is not represented in the sample, predictions from the $y$-modeling method will not involve large extrapolations.\n\n## 5. Conclusions\n\nThe four main conclusions obtained from this study are:\n• The changes in structural attributes and the LiDAR auxiliary information are weakly correlated. This weak correlation seems to be more evident in BMEF than in previous studies because of the slower growth in the study area and the relatively short lapse of time between LiDAR acquisitions, which indicates that for future studies in similar areas it might be necessary to increase the time lags between LiDAR flights.\n• In general, the $δ$-modeling method was found to be a slightly more accurate alternative to obtain estimates of change for the whole study area; however, the $y$-modeling method was able to produce better estimates at the stand level. In addition, the $y$-modeling method also seemed to be less prone to extrapolation problems. This indicates that field campaigns for the $δ$-modeling method have to be carefully designed, while the $y$-modeling method might be less sensitive to certain bias problems.\n• Despite the weak correlations with the changes in structural attributes, LiDAR auxiliary information allows obtaining estimates of growth for stands that improve over those derived using only field information.\n• The large uncertainty observed for pixel-level predictions indicated that high-resolution maps of growth, generated using LiDAR auxiliary information in similar conditions, should be taken as approximate products.\n\n## Author Contributions\n\nM.R. and B.W. developed the funding acquisition and data collection procedures. F.M. 
conceptualized and conducted the analyses and wrote the manuscript draft. M.R. and B.W. also participated in the conceptualization of the study. H.T., V.M., B.F. and A.H. provided significant input for the analyses and throughout the manuscript preparation.\n\n## Funding\n\nThis research received no external funding.\n\n## Acknowledgments\n\nWe would like to thank the USDA Forest Service, Region 5 and Lassen National Forest for assistance in project implementation, and the Guest Editor and two anonymous Reviewers for their constructive comments.\n\n## Conflicts of Interest\n\nThe authors declare no conflict of interest.\n\n## Appendix A\n\nTable A1. Sets of candidate predictors used in the study. Predictors included in the models to predict structural changes are highlighted with a boldface font. HiD, LoD and RNA represent the high diversity, low diversity and research natural areas respectively.\nDescription Auxiliary Variables Sets 1, 2 and 3AcronymDescription Auxiliary Variables Set 4Acronym\nSet 1 Year: 2009Set 2 Year: 2015Set 3, Difference 2015-2009Set 4\nMinimum, maximum, mean, mode, standard deviation, variance, coefficient of variation and interquartile range of the distribution of heights of the point cloud.Elev_min09Elev_min15$δ$Elev_min15-09Incoming solar radiationSolar_radiation\nElev_max09Elev_max15$δ$Elev_max15-09\nElev_mean09Elev_mean15$δ$Elev_mean15-09Structural diversity, factor with three levels HiD, LoD and RNA. Coded using two dummy variables. 
RNA reference level.HiD\nElev_mean209Elev_mean215$δ$Elev_mean215-09\nElev_mode09Elev_mode15$δ$Elev_mode15-09\nElev_stddv09Elev_stddv15$δ$Elev_stddv15-09LoD\nElev_var09Elev_var15$δ$Elev_var15-09\nElev_CV09Elev_CV15$δ$Elev_CV15-09Presence absence of prescribed fires. Coded using a dummy variable taking value 1 for stands where prescribed fires are applied and 0 otherwise.Burned\nElev_IQ09Elev_IQ15$δ$Elev_IQ15-09\nElev_AAD09Elev_AAD15$δ$Elev_AAD15-09\nElev_MADmed09Elev_MADmed15$δ$Elev_MADmed15-09\nElev_MADmod09Elev_MADmod15$δ$Elev_MADmod15-09\nPercentiles of the distribution of heights of the point cloud.Elev_P0109Elev_P0115$δ$Elev_P0115-09\nElev_P0509Elev_P0515$δ$Elev_P0515-09\nElev_P1009Elev_P1015$δ$Elev_P1015-09\nElev_P2009Elev_P2015$δ$Elev_P2015-09\nElev_P3009Elev_P3015$δ$Elev_P3015-09\nElev_P4009Elev_P4015$δ$Elev_P4015-09\nElev_P5009Elev_P5015$δ$Elev_P5015-09\nElev_P6009Elev_P6015$δ$Elev_P6015-09\nElev_P7009Elev_P7015$δ$Elev_P7015-09\nElev_P7509Elev_P7515$δ$Elev_P7515-09\nElev_P8009Elev_P8015$δ$Elev_P8015-09\nElev_P9009Elev_P9015$δ$Elev_P9015-09\nElev_P9509Elev_P9515$δ$Elev_P9515-09\nElev_P9909Elev_P9915$δ$Elev_P9915-09\nCanopy relief ratioCRR09CRR15$δ$CRR15-09\nPercentage of first (Fst) and all (All) returns above 2 mPcFstAbv209PcFstAbv215$δ$PcFstAbv215-09\nPcAllAbv209PcAllAbv215$δ$PcAllAbv215-09\nRatio all returns above 2 m to first returnsAllAbv2Fst09AllAbv2Fst15$δ$AllAbv2Fst15-09\nPercentage of first returns above the mean and modePcFstAbvMean09PcFstAbvMean15$δ$PcFstAbvMean15-09\nPcFstAbvMode09PcFstAbvMode15$δ$PcFstAbvMode15-09\nPercentage of all returns above the mean and modePcAllAbvMean09PcAllAbvMean15$δ$PcAllAbvMean15-09\nPcAllAbvMode09PcAllAbvMode15$δ$PcAllAbvMode15-09\nRatio of all returns above the mean and mode to number of first returnsAllAbvMeanFst09AllAbvMeanFst15$δ$AllAbvMeanFst15-09\nAllAbvModeFst09AllAbvModeFst15$δ$AllAbvModeFst15-09\nProportion of points in the height intervals [0,0.5), [0.5,1), [1,2), [2,4), [4,8) and [8,16) 
meters.Prop0_0509Prop0_0515$δ$Prop0_0515-09\nProp05_109Prop05_115$δ$Prop05_115-09\nProp1_209Prop1_215$δ$Prop1_215-09\nProp2_409Prop2_415$δ$Prop2_415-09\nProp4_809Prop4_815$δ$Prop4_815-09\nProp8_1609Prop8_1615$δ$Prop8_1615-09\n\n## Appendix B\n\nPredictions from the $δ$-modeling method tend to be smoother than predictions from the $y$-modeling method (Figure A1). For all variables, the proportion of pixel-level predictions using the $δ$-modeling method within the range of values observed for the field plots was always 99.84% or larger. Considering the presence of thinned stands and the relatively small fraction of the forest that is sampled, obtaining less than 0.16% of the predictions outside of the measurement range seems to be a clear sign of over smoothing. Predictions using the $y$-modeling method showed a greater variability, especially for BA, and the proportions of predictions inside the range of observed values, $P y$, were 99.45% for V, 95.82% for BA and 99.29% for B. For BA, pixel-level predictions using the $y$-modeling method were oftentimes negative and of larger magnitude than the changes in BA observed for the plots. However, these pixels represent a small proportion of the total predictions (i.e., 4.02%), and a significant portion of them correspond to pixels in thinned stands. It is important to note that these comparisons of predicted values inform about how similar the predictions from the two analyzed methods are and cannot be considered as indicators of accuracy or reliability. For all variables, pixel-level predictions by both methods were strongly correlated, with Pearson’s correlation coefficients of 0.92, 0.82 and 0.72 for V, BA and B, respectively (Figure A2). 
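The range check behind $P δ$ and $P y$ can be sketched as follows; the data are simulated for illustration only, and the over-smoothing interpretation mirrors the discussion above rather than reproducing the study's actual predictions.

```python
import numpy as np

def proportion_in_range(pred, observed):
    """Share of pixel-level predictions falling inside the range of
    values observed in the field sample (the P_delta / P_y checks)."""
    return np.mean((pred >= observed.min()) & (pred <= observed.max()))

rng = np.random.default_rng(2)
observed = rng.normal(0.0, 1.0, 100)        # plot-level observed changes
pred_smooth = rng.normal(0.0, 0.5, 10_000)  # over-smoothed predictions
pred_wide = rng.normal(0.0, 1.5, 10_000)    # more variable predictions

# Over-smoothed predictions almost never leave the observed range,
# which is exactly the warning sign discussed above.
print(proportion_in_range(pred_smooth, observed)
      > proportion_in_range(pred_wide, observed))  # True
```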
Finally, considering the unsampled and thinned stands, pixel-level predictions obtained by both methods showed the same inconsistencies observed at the stand level, especially for B using the $δ$-modeling method, where only about 4% of the predicted values were negative (i.e., removals of B). These inconsistencies are clearly due to extrapolations in the thinned stands and are analyzed in more detail in the next section.\nFigure A1. Maps of change in V, BA and B and corresponding pixel-level RMSE maps for the $δ$-modeling method.\nFigure A2. Comparison of pixel-level predictions for V, BA and B using the $δ$-modeling method and the $y$-modeling method; predictions for the unsampled stands subject to thinning are in red. The range of V, BA and B observed in the sample is indicated by the grey ribbons. The proportions, $P δ$ and $P y$, of predictions within the range of values observed in the sample, and the correlation between predictions from both methods, are indicated in the upper left corner. The proportion of pixels in the thinned stands where the $δ$-modeling method and $y$-modeling method predict losses (i.e., $P ( δ ^ i , δ < 0 )$ and $P ( δ ^ i , y < 0 )$) is indicated on the lower left quadrant of the figure.\n\n## Appendix C\n\nFigure A3. Indexes of extrapolation. Average of Mesgaran’s novelty index relative to the mean, $N T 2 ¯ m e a n$, for the sampled and not thinned stands (dark blue), unsampled stands not thinned (green) and unsampled and thinned stands (red). The value of this index for the field plots (light blue) provides the baseline value (i.e., the value observed for the sample of field plots).\nFigure A4. Comparison of density functions for the predictors in the models used to estimate changes in Basal Area using the $δ$-modeling method and $y$-modeling method in field plots (light blue), sampled and not thinned stands (dark blue), unsampled and not thinned stands (light blue) and unsampled and thinned stands (red). For each group the area of overlap, AO, with the density function for the field plots (green) is provided for each predictor.\nFigure A5. 
Comparison of density functions for the predictors in the models used to estimate changes in Biomass using the $δ$-modeling method and $y$-modeling method in field plots (light blue), sampled and not thinned stands (dark blue), unsampled and not thinned stands (light blue) and unsampled and thinned stands (red). For each group the area of overlap, AO, with the density function for the field plots (green) is provided for each predictor.\n\n## References\n\n1. Næsset, E. Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens. Environ. 2002, 80, 88–99.\n2. Andersen, H.-E.; McGaughey, R.J.; Reutebuch, S.E. Estimating forest canopy fuel parameters using LIDAR data. Remote Sens. Environ. 2005, 94, 441–449.\n3. González-Ferreiro, E.; Diéguez-Aranda, U.; Miranda, D. Estimation of stand variables in Pinus radiata D. Don plantations using different LiDAR pulse densities. For. Int. J. For. Res. 2012, 85, 281–292.\n4. Mauro, F.; Molina, I.; García-Abril, A.; Valbuena, R.; Ayuga-Téllez, E. Remote sensing estimates and measures of uncertainty for forest variables at different aggregation levels. Environmetrics 2016, 27, 225–238.\n5. Valbuena, R.; Packalen, P.; Mehtätalo, L.; García-Abril, A.; Maltamo, M. Characterizing forest structural types and shelterwood dynamics from Lorenz-based indicators predicted by airborne laser scanning. Can. 
J. For. Res. 2013, 43, 1063–1074.\n6. Eggleston, H.S.; Buendia, L.; Miwa, K.; Ngara, T.; Tanabe, K. IPCC Guidelines for National Greenhouse Gas Inventories, Volume 4: Agriculture, Forestry and Other Land Use; Institute for Global Environmental Strategies: Hayama, Japan, 2006; Volume 4, ISBN 4-88788-032-4.\n7. Babcock, C.; Finley, A.O.; Bradford, J.B.; Kolka, R.; Birdsey, R.; Ryan, M.G. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients. Remote Sens. Environ. 2015, 169, 113–127.\n8. Poudel, K.P.; Flewelling, J.W.; Temesgen, H. Predicting Volume and Biomass Change from Multi-Temporal Lidar Sampling and Remeasured Field Inventory Data in Panther Creek Watershed, Oregon, USA. Forests 2018, 9, 28.\n9. Temesgen, H.; Strunk, J.; Andersen, H.-E.; Flewelling, J. Evaluating different models to predict biomass increment from multi-temporal lidar sampling and remeasured field inventory data in south-central Alaska. Math. Comput. For. Nat.-Resour. Sci. (MCFNS) 2015, 7, 66–80.\n10. McRoberts, R.E.; Næsset, E.; Gobakken, T.; Chirici, G.; Condes, S.; Hou, Z.; Saarela, S.; Chen, Q.; Stahl, G.; Walters, B.F. Assessing components of the model-based mean square error estimator for remote sensing assisted forest applications. Can. J. For. Res. 2018, 48, 642–649.\n11. Næsset, E.; Gobakken, T.; Solberg, S.; Gregoire, T.G.; Nelson, R.; Ståhl, G.; Weydahl, D. Model-assisted regional forest biomass estimation using LiDAR and InSAR as auxiliary data: A case study from a boreal forest area. Remote Sens. Environ. 2011, 115, 3599–3614.\n12. Massey, A.; Mandallaz, D. Design-based regression estimation of net change for forest inventories. Can. J. For. Res. 2015, 45, 1775–1784.\n13. Rao, J.N.K.; Molina, I. Introduction. 
In Small Area Estimation, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015; pp. 1–8. ISBN 978-1-118-73585-5.\n14. Breidenbach, J.; Astrup, R. Small area estimation of forest attributes in the Norwegian National Forest Inventory. Eur. J. For. Res. 2012, 131, 1255–1267.\n15. Mauro, F.; Monleon, V.J.; Temesgen, H.; Ford, K.R. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information. PLoS ONE 2017, 12, e0189401.\n16. Goerndt, M.E.; Monleon, V.J.; Temesgen, H. Small-Area Estimation of County-Level Forest Attributes Using Ground Data and Remote Sensed Auxiliary Information. For. Sci. 2013, 59, 536–548.\n17. Breidenbach, J.; Magnussen, S.; Rahlf, J.; Astrup, R. Unit-level and area-level small area estimation under heteroscedasticity using digital aerial photogrammetry data. Remote Sens. Environ. 2018, 212, 199–211.\n18. Magnussen, S.; Næsset, E.; Gobakken, T. LiDAR-supported estimation of change in forest biomass with time-invariant regression models. Can. J. For. Res. 2015, 45, 1514–1523.\n19. Næsset, E.; Bollandsås, O.M.; Gobakken, T.; Gregoire, T.G.; Ståhl, G. Model-assisted estimation of change in forest biomass over an 11-year period in a sample survey supported by airborne LiDAR: A case study with post-stratification to provide “activity data”. Remote Sens. Environ. 2013, 128, 299–314.\n20. Næsset, E.; Gobakken, T. Estimating forest growth using canopy metrics derived from airborne laser scanner data. Remote Sens. Environ. 2005, 96, 453–465.\n21. Ritchie, M.W. Multi-scale reference conditions in an interior pine-dominated landscape in northeastern California. For. Ecol. Manag. 2016, 378, 233–243.\n22. 
Adams, M.B.; Loughry, L.H.; Plaugher, L.L. Experimental Forests and Ranges of the USDA Forest Service; United States Department of Agriculture, Forest Service, Northeastern Research Station: Newtown Square, PA, USA, 2008; p. 191.\n23. Oliver, W.W. Ecological Research at the Blacks Mountain Experimental Forest in Northeastern California; United States Department of Agriculture, Forest Service, Pacific Southwest Research Station: Albany, CA, USA, 2000; p. 73.\n24. Wing, B.M.; Ritchie, M.W.; Boston, K.; Cohen, W.B.; Olsen, M.J. Individual snag detection using neighborhood attribute filtered airborne lidar data. Remote Sens. Environ. 2015, 163, 165–179.\n25. Hudak, A.T.; Strand, E.K.; Vierling, L.A.; Byrne, J.C.; Eitel, J.U.; Martinuzzi, S.; Falkowski, M.J. Quantifying aboveground forest carbon pools and fluxes from repeat LiDAR surveys. Remote Sens. Environ. 2012, 123, 25–40.\n26. Area Solar Radiation—Help | ArcGIS Desktop. Available online: http://desktop.arcgis.com/en/arcmap/10.6/tools/spatial-analyst-toolbox/area-solar-radiation.htm (accessed on 8 April 2019).\n27. Mauro, F.; Monleon, V.; Temesgen, H.; Ruiz, L. Analysis of spatial correlation in predictive models of forest variables that use LiDAR auxiliary information. Can. J. For. Res. 2017, 47, 788–799.\n28. Rao, J.N.K.; Molina, I. Basic Unit Level Model. In Small Area Estimation, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015; pp. 173–234. ISBN 978-1-118-73585-5.\n29. Rao, J.; Molina, I. Empirical Best Linear Unbiased Prediction (EBLUP): Theory. In Small Area Estimation, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015; pp. 97–122.\n30. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018.\n31. Pinheiro, J.; Bates, D.; DebRoy, S.; Sarkar, D.; R Core Team. 
nlme: Linear and Nonlinear Mixed Effects Models. R package version 3.1-137. Available online: https://cran.r-project.org/web/packages/nlme/index.html (accessed on 15 April 2019).\n32. Datta, G.S.; Lahiri, P. A unified measure of uncertainty of estimated best linear unbiased predictors in small area estimation problems. Stat. Sin. 2000, 10, 613–628.\n33. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman and Hall: Boca Raton, FL, USA, 1986.\n34. Mesgaran, M.B.; Cousens, R.D.; Webber, B.L. Here be dragons: A tool for quantifying novelty due to covariate range and correlation change when projecting species distribution models. Divers. Distrib. 2014, 20, 1147–1159.\n35. Bollandsås, O.M.; Gregoire, T.G.; Næsset, E.; Øyen, B.-H. Detection of biomass change in a Norwegian mountain forest area using small footprint airborne laser scanner data. Stat. Methods Appl. 2013, 22, 113–129.\n36. Fekety, P.A.; Falkowski, M.J.; Hudak, A.T. Temporal transferability of LiDAR-based imputation of forest inventory attributes. Can. J. For. Res. 2014, 45, 422–435.\n37. Ozdemir, I.; Donoghue, D.N. Modelling tree size diversity from airborne laser scanning using canopy height models with image texture measures. For. Ecol. Manag. 2013, 295, 28–37.\nFigure 1. Study area location map, delineated stands and field plots, and detailed diagram showing the light detection and ranging (LiDAR) field plots grid over the permanent Blacks Mountain Experimental Forest (BMEF) grid of permanent markers.", null, "Figure 2. 
Estimates of V, BA and B change for the sampled stands of Blacks Mountain Experimental Forest. LiDAR-derived estimates using the $δ$-modeling method are indicated by blue dots, LiDAR-derived estimates obtained using the $y$-modeling method are indicated with red dots and field-based estimates are indicated in black.", null, "Figure 3. Estimates of V, BA and B change for the unsampled stands of Blacks Mountain Experimental Forest. LiDAR-derived estimates using the $δ$-modeling method are indicated by blue dots and LiDAR-derived estimates obtained using the $y$-modeling method are indicated with red dots. Thinned stands are to the left and non-thinned stands to the right.", null, "Figure 4. Values of $RMSE_δ$ (blue), $RMSE_y$ (red) and $SE_f$ (black) for the stand-level estimates of V, BA and B.", null, "Figure 5. Indexes of extrapolation. Average of Mesgaran’s novelty index, $\overline{NT2}$, for the sampled and not thinned stands (dark blue), unsampled stands not thinned (green) and unsampled and thinned stands (red). 
The value of this index for the field plots (light blue) provides the baseline value (i.e., the value observed for the sample of field plots).", null, "Figure 6. Comparison of density functions for the predictors in the models used to estimate changes in Volume using the $δ$-modeling method and $y$-modeling method in field plots (light blue), sampled and not thinned stands (dark blue), unsampled and not thinned stands (light blue) and unsampled and thinned stands (red). For each group the area of overlap, AO, with the density function for the field plots (green) is provided for each predictor.", null, "Table 1. Minimum (Min), mean (Mean), standard deviation (Sd), and maximum (Max) of the plot-level values for 2009, 2015 and yearly increments for the period 2009–2015. 
Values of volume V, basal area BA and biomass B are expressed on a per-hectare basis.\n| Variable (Units) | Period | Min | Mean | Sd | Max |\n| --- | --- | --- | --- | --- | --- |\n| V (m3 ha−1) | 2009 | 19.87 | 166.93 | 119.66 | 619.43 |\n| BA (m2 ha−1) | 2009 | 3.81 | 23.43 | 12.02 | 66.54 |\n| B (Mg ha−1) | 2009 | 8.31 | 83.65 | 61.55 | 323.30 |\n| V (m3 ha−1) | 2015 | 17.20 | 175.52 | 117.04 | 644.30 |\n| BA (m2 ha−1) | 2015 | 3.42 | 25.45 | 12.01 | 67.47 |\n| B (Mg ha−1) | 2015 | 8.34 | 89.38 | 60.29 | 335.03 |\n| V (m3 ha−1 year−1) | Increment 2009–2015 | −10.89 | 1.43 | 3.88 | 11.19 |\n| BA (m2 ha−1 year−1) | Increment 2009–2015 | −0.91 | 0.34 | 0.45 | 1.74 |\n| B (Mg ha−1 year−1) | Increment 2009–2015 | −5.81 | 0.95 | 1.97 | 5.99 |\nTable 2. Summary models for the $δ$-modeling method. Model coefficients, standard errors of the model coefficients, variance parameters and general metrics for accuracy assessment are provided. Predictor acronyms are explained in Table A1. Coef is the value of the coefficient and Std.Error its corresponding standard error. V indicates volume, BA indicates basal area and B indicates biomass.\n| Model | Predictor | Coef | Std. Error | $\hat{σ}^2_{δv}$ | $\hat{σ}^2_{δε}$ | $mRMSE$ | $mRRMSE$ | $mBias$ | $mRBias$ |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| V (m3 ha−1 year−1) | Intercept | 1.16 | 0.31 | 0.50 | 10.53 | 3.47 | 241.99% | −1.83 × 10−4 | −0.01% |\n| | $δ$Elev_P50 15-09 | 1.33 | 0.27 | | | | | | |\n| | $δ$PcFstAbv2 15-09 | 0.23 | 0.07 | | | | | | |\n| BA (m2 ha−1 year−1) | Intercept | 0.31 | 0.12 | 0.01 | 0.14 | 0.39 | 116.30% | −8.2 × 10−4 | −0.25% |\n| | $δ$PcAllAbv2 15-09 | 0.05 | 0.01 | | | | | | |\n| | Elev_P75 09 | −0.03 | 0.01 | | | | | | |\n| | PcAllAbv2 15-09 | 0.02 | <0.01 | | | | | | |\n| B (Mg ha−1 year−1) | Intercept | 1.03 | 0.17 | 0.19 | 2.52 | 1.72 | 180.20% | −1.09 × 10−3 | −0.11% |\n| | $δ$Elev_var 15-09 | 0.05 | 0.02 | | | | | | |\n| | $δ$Elev_P50 15-09 | 1.03 | 0.20 | | | | | | |\n| | $δ$CRR 15-09 | −16.67 | 6.58 | | | | | | |\nTable 3. Summary models for the $y$-modeling method. Model coefficients, standard errors of the model coefficients, variance-covariance parameters and general metrics for accuracy assessment are provided. 
Covariate acronyms are explained in Table A1. Coef is the value of the coefficient and Std.Error its corresponding standard error. V indicates volume, BA indicates basal area and B indicates biomass.\nModel | Year | Covariate | Coef | Std.Error | $\hat{σ}^2_{yu}$ | $K_{ijt}$ | $ω_{yt}$ | $\hat{σ}^2_{ytε}$ | $ρ_e$ | General Accuracy Metrics for Change Per Hectare and Year: $mRMSE$, $mRRMSE$, $mBIAS$, $mRBias$\nV(m3 ha−1)2009Intercept−19.0910.36640.29Elev_mean2090.643.000.853.76262.62%0.139.24%\nElev_mean2092.520.23\nPcFstAbv2090.630.05\n2015Intercept2.690.23Elev_mean2150.614.17\nElev_mean2150.690.05\nPcFstAbv215−26.3011.10\nBA(m2 ha−1)2009Intercept−0.221.577.42PcFstAbv2090.480.810.850.47138.06%0.011.53%\nElev_P1009−1.370.34\nElev_P30091.580.24\nPcFstAbv2090.510.03\n2015Intercept−2.160.61PcFstAbv2150.451.12\nElev_P10152.560.51\nElev_P20150.570.03\nPcFstAbv215−0.971.72\nB(Mg ha−1)2009Intercept−11.865.19165.69Elev_mean2090.710.390.851.94203.69%0.088.60%\nElev_mean2091.190.11\nPcFstAbv2090.340.02\n2015Intercept1.310.12Elev_mean2150.581.47\nElev_mean2150.370.03\nPcFstAbv215−14.155.76\nTable 4. Average increments of volume V, basal area BA and biomass B in the entire study area excluding the thinned stands (SA) and for the union of the sampled stands (SS). 
Estimates ($\hat{Δ}$), root mean square errors ($RMSE$), coefficients of variation ($CV$) and confidence intervals ($CI$) obtained using the $δ$-modeling method and the $y$-modeling method are compared to estimates ($\hat{Δ}_f$), standard errors ($SE_f$), coefficients of variation ($CV_f$), and confidence intervals ($CI_f$) using only the field information.\n| Variable | Area | $\hat{Δ}_δ$ | $RMSE_δ$ | $CV_δ$ | $CI_δ$ | $\hat{Δ}_y$ | $RMSE_y$ | $CV_y$ | $CI_y$ | $\hat{Δ}_f$ | $SE_f$ | $CV_f$ | $CI_f$ |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| V (m3 ha−1 year−1) | SS | 1.66 | 0.27 | 16.29% | 1.12–2.20 | 1.95 | 0.32 | 16.48% | 1.31–2.60 | 1.43 | 0.32 | 22.21% | 0.80–2.07 |\n| | SA | 1.67 | 0.30 | 17.98% | 1.07–2.27 | 1.98 | 0.29 | 14.67% | 1.40–2.56 | | | | |\n| BA (m2 ha−1 year−1) | SS | 0.36 | 0.03 | 8.68% | 0.30–0.42 | 0.37 | 0.04 | 9.93% | 0.30–0.45 | 0.34 | 0.04 | 10.87% | 0.26–0.41 |\n| | SA | 0.42 | 0.04 | 8.41% | 0.35–0.49 | 0.44 | 0.04 | 9.61% | 0.35–0.52 | | | | |\n| B (Mg ha−1 year−1) | SS | 1.07 | 0.13 | 12.35% | 0.81–1.34 | 1.24 | 0.17 | 13.61% | 0.90–1.57 | 0.95 | 0.16 | 16.89% | 0.63–1.28 |\n| | SA | 1.15 | 0.16 | 13.66% | 0.83–1.46 | 1.29 | 0.15 | 11.83% | 0.98–1.59 | | | | |\nTable 5. 
Minimum (Min), 5th percentile (p05), mean, median, 95th percentile (p95) and maximum (Max) of $RMSE_δ$ (27) and $RMSE_y$ (28) for the pixels of the study area.\n| Variable | Method | Min | p05 | Mean | Median | p95 | Max |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| V (m3 ha−1 year−1) | $δ$-modeling method | 0.42 | 0.42 | 2.30 | 3.30 | 3.59 | 9.41 |\n| | $y$-modeling method | 0.08 | 0.37 | 2.49 | 2.20 | 6.01 | 32.69 |\n| BA (m2 ha−1 year−1) | $δ$-modeling method | 0.38 | 0.38 | 0.39 | 0.38 | 0.40 | 0.59 |\n| | $y$-modeling method | 0.11 | 0.30 | 0.48 | 0.48 | 0.64 | 1.47 |\n| B (Mg ha−1 year−1) | $δ$-modeling method | 1.62 | 1.63 | 1.67 | 1.65 | 1.76 | 4.57 |\n| | $y$-modeling method | 0.47 | 1.10 | 1.89 | 1.76 | 3.09 | 10.45 |" ]
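The area-of-overlap (AO) statistic quoted in the figure captions above compares the density function of each predictor in a stand group against the density function observed on the field plots. A minimal sketch of how such an AO value can be computed (my own illustration with synthetic data, not the authors' code; it uses a plain Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth):

```python
import numpy as np

def kde(sample, grid):
    # Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth.
    n = sample.size
    h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

def area_of_overlap(x, y, points=512):
    # AO = integral of the pointwise minimum of the two densities.
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), points)
    dx = grid[1] - grid[0]
    return float(np.minimum(kde(x, grid), kde(y, grid)).sum() * dx)

rng = np.random.default_rng(0)
field = rng.normal(20.0, 5.0, 200)    # e.g., a LiDAR height metric on field plots
stands = rng.normal(14.0, 5.0, 200)   # the same metric on a group of stands
print(f"AO = {area_of_overlap(field, stands):.2f}")
```

AO is 1 when the two densities coincide and approaches 0 as they separate, which is why low AO values in the thinned stands signal extrapolation risk for the fitted models.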
[ null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g0A1.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g0A2.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g0A3.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g0A4.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g0A5.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g001.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g002.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g003.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g004.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g005.png", null, "https://www.mdpi.com/remotesensing/remotesensing-11-00923/article_deploy/html/images/remotesensing-11-00923-g006.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90860873,"math_prob":0.9768937,"size":88636,"snap":"2020-45-2020-50","text_gpt3_token_len":20897,"char_repetition_ratio":0.19162379,"word_repetition_ratio":0.19290832,"special_character_ratio":0.23597635,"punctuation_ratio":0.13709015,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99729323,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T20:57:15Z\",\"WARC-Record-ID\":\"<urn:uuid:326169e0-eb73-464d-8283-78829793d42a>\",\"Content-Length\":\"498854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38c8ca4b-ba40-4bea-82df-663f33ab3faf>\",\"WARC-Concurrent-To\":\"<urn:uuid:8858de08-9478-41b3-930c-cf450f5b0220>\",\"WARC-IP-Address\":\"104.18.24.151\",\"WARC-Target-URI\":\"https://www.mdpi.com/2072-4292/11/8/923/htm\",\"WARC-Payload-Digest\":\"sha1:OXPESUA6C22KF7PQVI3R5M43RBWWDK2K\",\"WARC-Block-Digest\":\"sha1:WQUZG3INXHTZTIVNQSYPZXKAN7AVJIXC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107874135.2_warc_CC-MAIN-20201020192039-20201020222039-00079.warc.gz\"}"}
https://casaguides.nrao.edu/index.php?title=Uvcontsub&diff=10004&oldid=689
[ "# Difference between revisions of \"Uvcontsub\"\n\n```Help on uvcontsub task:\n\nContinuum fitting and subtraction in the uv plane\n\nA polynomial of the desired order is fit to each baseline/time across the\nspecified channels that define the continuum emission. The data may\nbe averaged in time to increase the signal to noise. This fit\nrepresents a uv-data model of the continuum across all channels.\n\nFor fitmode='subtract', the fitted continuum uv spectrum is\nsubtracted from all channels and the result (presumably only\nline emission) is stored in the CORRECTED_DATA. The\ncontinuum fit is stored in the MODEL_DATA.\n\nFor fitmode='model' the continuum model is stored in the\nMODEL_DATA; but the CORRECTED_DATA is unaffected.\n\nFor fitmode='replace' the continuum model is stored in\nthe CORRECTED_DATA; this is useful to image the continuum model\nresult. You will have to rerun applycal to place the calibrated\ndata back in the CORRECTED_DATA\n\nKeyword arguments:\nvis -- Name of input visibility file\ndefault: none; example: vis='ngc5921.ms'\nfield -- Field selection\ndefault: field = '' means select all fields\nfield = 1 # will get field_id=1 (if you give it an\ninteger, it will retrieve the source with that index.\nfield = '1328+307' specifies source '1328+307'\nfield = '13*' will retrieve '1328+307' and any other fields\nbeginning with '13'\nfitspw -- Selection of spectral windows and channels to use in the\nfit for the continuum, using general spw syntax\ndefault: ''=all; example: fitspw='0:5~30;40~55'\nspw -- Optional per spectral window selection of channels from\nwhich to subtract the continuum.\ndefault: spw='', means subtract from all channels for\neach spw that had a continuum estimated\nAlso, this specifies which spw/chan will be split\nout if splitdata=True.\nsolint -- Timescale for per-baseline fit (units optional)\ndefault: 'int' --> no averaging, fit every integration;\nexample: solint='10s' --> average to 10s before fitting\n10 or '10' --> '10s' (unitless: assumes seconds)\noptions: 'int' --> per integration\n'inf' --> per scan\nfitorder -- Polynomial order for the fit of the continuum\ndefault: 0 (constant); example: fitorder=1\nfitmode -- Use of the continuum fit model\ndefault: 'subtract'; example: fitmode='replace'\nOptions:\n'subtract'-store fitted continuum model in MODEL and\nsubtract this continuum from data in CORRECTED to\nproduce line-emission in CORRECTED.\n'model'-store fit continuum model in MODEL, but\ndo not change data in CORRECTED.\n'replace'-replace CORRECTED with continuum model fit.\nsplitdata -- Split out continuum and continuum subtracted line data\ndefault: 'False'; example: splitdata=True\nThe derived continuum data will be placed in: vis.cont\nThe continuum subtracted data will be placed in: vis.contsub\nThe spw/channels selected are given in spw\nasync -- Run task in a separate process 
(return CASA prompt)\ndefault: False; example: async=True\n```" ]
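The fitting step this help text describes — a low-order polynomial fit across the line-free (`fitspw`) channels of one baseline's spectrum, subtracted from all channels — can be sketched with NumPy. This is an illustration of the idea only, not the CASA implementation, and the channel numbers and data below are invented:

```python
import numpy as np

def continuum_subtract(spectrum, fit_channels, fitorder=0):
    """Polynomial continuum fit over line-free channels, subtracted everywhere.

    spectrum     : complex visibilities for one baseline/time, shape (nchan,)
    fit_channels : indices of the line-free channels (the 'fitspw' selection)
    fitorder     : polynomial order (0 = constant, the task default)
    Returns (line, model), loosely mirroring fitmode='subtract' and 'model'.
    """
    chans = np.arange(spectrum.size)
    # Fit real and imaginary parts separately with ordinary least squares.
    cr = np.polyfit(chans[fit_channels], spectrum[fit_channels].real, fitorder)
    ci = np.polyfit(chans[fit_channels], spectrum[fit_channels].imag, fitorder)
    model = np.polyval(cr, chans) + 1j * np.polyval(ci, chans)
    return spectrum - model, model

# Toy example: flat continuum of 2+1j with a 'line' in channels 30-35.
nchan = 64
spec = np.full(nchan, 2.0 + 1.0j)
spec[30:36] += 5.0
fit_channels = np.r_[0:25, 40:64]   # line-free selection, like fitspw='0:0~24;40~63'
line, model = continuum_subtract(spec, fit_channels, fitorder=0)
```

With `fitorder=0` the model is simply the mean of the line-free channels; averaging in time first (the `solint` parameter) raises the signal-to-noise of that fit, as the help text notes.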
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76955205,"math_prob":0.47114807,"size":4039,"snap":"2023-14-2023-23","text_gpt3_token_len":958,"char_repetition_ratio":0.14597274,"word_repetition_ratio":0.5448613,"special_character_ratio":0.25427085,"punctuation_ratio":0.116504855,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95051754,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T05:58:50Z\",\"WARC-Record-ID\":\"<urn:uuid:17044388-8910-4144-9956-9aa04485f82b>\",\"Content-Length\":\"19690\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bddba33c-7273-4fb3-a9dd-5f8d795cd5ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:f992aa72-752f-49d9-bd7b-ab4ec2ab093f>\",\"WARC-IP-Address\":\"192.33.115.129\",\"WARC-Target-URI\":\"https://casaguides.nrao.edu/index.php?title=Uvcontsub&diff=10004&oldid=689\",\"WARC-Payload-Digest\":\"sha1:MFT5ONNHHZOX3FINFRKIYKK6IFLVNNCA\",\"WARC-Block-Digest\":\"sha1:T6ZHZ7MZHLFJSIPJLJ4YNGJYQ3UPU25D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653608.76_warc_CC-MAIN-20230607042751-20230607072751-00110.warc.gz\"}"}
https://puzzling.stackexchange.com/questions/87372/mensa-question-need-help/87382
[ "# Mensa question need help\n\nI've tried to solve this question (Exercise 31) in many ways but I couldn't figure out the logic behind this.", null, "My solution:\n\nD\n\nMy reasoning:\n\nTreat the triples as vectors over a set with elements B (black), W (white) and S (striped). The vectors in the first two rows are \"added\" (with some noncommutative binary operation), the third row is the result. From the first elements in the first column, we get S + S = W. This leaves the solutions B, D and E. B and E can be eliminated because W + S can't be two values at the same time.\n\n• B + B = W or S (row 1 & 2), so why can't W + S be two values at the same time? – zcahfg2 Nov 25 '19 at 19:28\n\n'A'. There are 9 white squares - 3 in the right position, 3 in the middle, 3 in the left spot. There are 10 striped - 4 left, 3 middle, 3 right. There are 5 blacks - 1 left, 2 middle, 2 right.\nOption A would give you:\n6 blacks, 2 at each position.\n9 whites, 3 at each position.\n12 striped, 4 at each position.\n\n• Hi @Nicko, welcome to Puzzling SE! Take the tour if you haven't already! This is a great first answer, and I've reformatted it to include spoiler tags using >! in order to hide your solution. I hope this helps! – HTM Oct 16 '19 at 23:32\n• Thank you, and it does help. I now know how to hide answers and will do so. – Nicko Oct 16 '19 at 23:34\n\nI would say possibly A because the number of whites decreases by 1 per subsequent column from left to right. Also there is another contingent pattern it solves, with the sum of blacks per subsequent row from top to bottom decreasing by 1. No other answer satisfies the first logic anyway, so it is sufficient to stand on its own as the sole reason.\n\nMy solution:\n\nD\n\nMy reasoning:\n\nThe pattern works both in rows and columns: The first and third figures in the third row and third column are white & grey. 
Therefore, whatever the result is, the first and third figures must be the same, which leaves us with options D and F. From the first row, we know that white & grey is grey, which leaves us D.\n\nI need help with finding a pattern among the black squares; what I have found so far:\n\nWhy?\n\nI took the test 3 times in a row giving the exact same answers for problems 1 to 15 (14 correct + 1 wrong) and then gave the answers A, B and D on problem 31 in the respective tests. A and D give 88 IQ, B gives 92 IQ -> B is correct. (double and triple checked this, it's B)\n\nA, B or D is the correct answer, why?\n\nThere is a clear pattern on the diagonals leaving B or D (A/B/D on new version) as the only possible correct answer. NOTICE: THIS SPECIFIC TEST IS THE OLD VERSION AND ON THE NEW ONE A is replaced with another 3 black tiled picture. So in the picture above we can deduce that B or D is the correct answer.\n\nOn the diagonals top left to bottom right we have an inversion of stripes + white on the main diagonal (diagonal from top right to bottom left including pictures 3, 5 and 7) -> inversion: white becomes striped, striped becomes white. This is if we look past the black tiles.\n\nSo our answer gives a white middle square because of picture 1 having a white middle square, and striped outer squares because of picture 5 having white outer squares (inverse of the diagonal -> change to striped).\n\nTherefore we know that it's (Striped, White, Striped), but since we do not know the black pattern, there are possible black squares which could cover it, leaving B or D as the only possible correct answers (A/B/D in new version).\n\nThe only possible solution: the rows give a (3 black, 2 black, 1 black) pattern and columns give a (2 black, 3 black, 1 black) pattern -> the answer must have one black -> B is the correct answer. (THIS IS VERY ARBITRARY)\n\nI feel like this theory behind the black squares is very arbitrary and might be incorrect. 
I would love some other explanation behind the black squares.\n\nLast notes:\n\n• there is also a pattern on the opposite diagonal, also yielding B or D (A/B/D on new version).\n• if the legit answer of the old version was D, then the puzzle might be just a random solution created to waste time, with the logic that someone smart would see there is no pattern and waste little time, while someone less smart would spend a lot of time and therefore have almost no time for the last problems.\n\nThis is not a full solution. It is brainstorming that others might be able to use.. if I come up with a solution that meets all requirements I will delete this comment and append the solution to the bottom.\n\nLet's first look at the solution posted by Alexander Fasching.\n\n- Overlay the left column with the center column.\n- Let G be striped (grey), B be black, and W be white.\n- Wherever G and B overlap the result is W\n- Wherever W and G overlap the result is G\n- W and W overlapping is undefined.\n* - But if you take the first horizontal row and overlay it with the middle horizontal row you get B+W=W on column 1 and B+W=G on column 2.\n* - Another break is to work on the columns left to right, where it breaks because on Row 1 B+B=W but on Row 2 B+B=G\n\nTherefore I am thinking there must be another solution..\n\nThat being said, what if we look at Deepthinker101's solution.\n\n- First off, as they pointed out, choosing A allows for all small boxes to be grouped into 3 same-color sets evenly.\n- 3 sets of whites, 2 sets of black, and 4 sets of greys.\n- The only issue I have is that it seems arbitrary to break all of the groups apart to meet this goal.. but perhaps there's a pattern that explains it.\n\nSo, looking even further into it..\n\n- I found a definite pattern.. - I assigned G as -1, W as 0 and B as 1. I then subtracted C1R1 from C4R2 and arrived at -1 (C9R3). Then I subtracted C1R2 from C4R3 and got C9R1. Then I subtracted C1R3 from C4R1 and got C9R2. 
You can repeat this system for the entire grid and it works out.. Except it requires a solution of Grey, Grey, Black.\n- As this is not an option it can't be the answer, but perhaps the pattern can still be used.\n\nD because horizontally, white + grey -> grey\n\n• Can you expand on this a bit? It's hard to understand what \"white + grey -> grey\" means. – Rand al'Thor Sep 26 '19 at 12:42\n• That does not explain why black+black->white on the first row but black+black->grey on the second row. – Jaap Scherphuis Sep 26 '19 at 12:49\n• b + b can be white or gray. It's fuzzy :D – Markus Sep 26 '19 at 13:09\n• actually despite the downvote this is the most straightforward answer for D. (And the intended answer is D because my \"IQ\" only grew from 95 to 97 when I chose D on question 31 while using the same pattern of answers on the rest of the questions on test.mensa.no/# hehe – balazs.com Oct 30 '19 at 14:40" ]
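Nicko's counting argument above can be checked mechanically. The per-position tallies below come straight from that answer; the completion (black, striped, striped) is my own reconstruction of option A from those tallies, and is an assumption since the figure itself is only available as an image:

```python
from collections import Counter

# Per-position tallies of the eight given triples, as reported in Nicko's answer.
given = {
    "white":   Counter(L=3, M=3, R=3),   # 9 white squares
    "striped": Counter(L=4, M=3, R=3),   # 10 striped squares
    "black":   Counter(L=1, M=2, R=2),   # 5 black squares
}

# Hypothetical reading of option A: (left, middle, right) = (black, striped, striped).
candidate = {"L": "black", "M": "striped", "R": "striped"}

totals = {colour: tally.copy() for colour, tally in given.items()}
for pos, colour in candidate.items():
    totals[colour][pos] += 1

# The answer claims each colour ends up equally often at every position.
for colour, tally in totals.items():
    counts = [tally[p] for p in ("L", "M", "R")]
    print(colour, counts, "balanced" if len(set(counts)) == 1 else "unbalanced")
```

Running this reproduces the totals claimed in the answer (6 blacks, 9 whites, 12 striped, each colour evenly spread across the three positions), so the balancing claim at least is internally consistent.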
https://help.scilab.org/docs/6.0.2/pt_BR/CRANI.html
Crank-Nicolson 2(3)

Crank-Nicolson is a numerical solver based on the Runge-Kutta scheme, providing an efficient and stable implicit method to solve Ordinary Differential Equation (ODE) Initial Value Problems. Called by xcos.

Description

Crank-Nicolson is a numerical solver based on the Runge-Kutta scheme, providing an efficient and stable fixed-size step method to solve Initial Value Problems of the form:

dy/dt = f(t, y),   y(t0) = y0

CVode and IDA use variable-size steps for the integration. This makes the computation times unpredictable. Runge-Kutta-based solvers do not adapt to the complexity of the problem, but guarantee a stable computation time.

Being implicit, this method can be used on stiff problems.

It is an enhancement of the backward Euler method, which approximates yn+1 by computing f(tn+h, yn+1) and truncating the Taylor expansion.

By convention, to use fixed-size steps, the program first computes a fitting h that approaches the simulation parameter "max step size".

An important difference of Crank-Nicolson from the previous methods is that it computes up to the second derivative of y, while the others mainly use linear combinations of y and y'.

Here, the next value is determined by the present value yn plus the weighted average of two increments, where each increment is the product of the step size h and an estimated slope specified by the function f(t, y):

• k1 is the increment based on the slope at the midpoint of the interval, using yn + a11*h*k1/2 + a12*h*k2/2,
• k2 is the increment based on the slope at the midpoint of the interval, but now using yn.

We see that the computation of the ki requires the ki themselves, thus necessitating the use of a nonlinear solver (here, fixed-point iterations).

First, we set k0 = h * f(tn, yn) as the first guess for both ki, to get updated ki and a first value for yn+1.

Next, we save and recompute yn+1 with those new ki.

Then, we compare the two values of yn+1 and recompute until the difference from the last computed value is smaller than the simulation parameter reltol.

This process adds significant computation time to the method, but greatly improves stability.

While computing a new k2 only requires one call to the derivative of yn, thus making an error in O(h^2), k1 requires two calls (one for its initial computation and one for the new one). So in k1 we are approximating the second derivative of y at tn, thus making an error in O(h^3).

So the total error is (number of steps) * O(h^3). And since number of steps = interval size / h by definition, the total error is in O(h^2).

That error analysis gives the method its name, Crank-Nicolson 2(3): O(h^3) per step, O(h^2) in total.

Although the solver works fine for max step size values as small as 10^-3, rounding errors sometimes come into play as we approach 4*10^-4.
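The fixed-point iteration described above is easy to sketch outside of Xcos. The following is a minimal Python illustration (not the actual Scilab implementation; the function name and parameters are invented for the example):

```python
import math

def crank_nicolson_step(f, t, y, h, reltol=1e-10, max_iter=50):
    """One Crank-Nicolson step for y' = f(t, y):
    y_{n+1} = y_n + h/2 * (f(t_n, y_n) + f(t_{n+1}, y_{n+1})),
    with the implicit equation solved by fixed-point iteration."""
    k0 = f(t, y)                  # slope at (t_n, y_n), used as first guess
    y_next = y + h * k0           # explicit Euler predictor for y_{n+1}
    for _ in range(max_iter):
        y_new = y + 0.5 * h * (k0 + f(t + h, y_next))
        if abs(y_new - y_next) < reltol:   # successive iterates agree: converged
            return y_new
        y_next = y_new
    return y_next

# Integrate y' = -y, y(0) = 1 over [0, 1]; the exact solution is exp(-t).
f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.01
while t < 1.0 - 1e-12:
    y = crank_nicolson_step(f, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))  # total error is O(h^2), here well below 1e-4
```

The per-step loop is exactly the "compare the two yn+1 and recompute" procedure described above, with `reltol` playing the role of the simulation parameter.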
Indeed, the interval splitting cannot be done properly and we get capricious results.

Examples

The integral block returns its continuous state; we can evaluate it with Crank-Nicolson by running the example:

// Import the diagram and set the ending time
importXcosDiagram("SCI/modules/xcos/examples/solvers/ODE_Example.zcos");
scs_m.props.tf = 5000;

// Select the Crank-Nicolson solver and set the precision
scs_m.props.tol(2) = 10^-10;
scs_m.props.tol(6) = 8;
scs_m.props.tol(7) = 10^-2;

// Start the timer, launch the simulation and display the time
tic();
try xcos_simulate(scs_m, 4); catch disp(lasterror()); end
t = toc();
disp(t, "Time for Crank-Nicolson:");

The Scilab console displays:

Time for Crank-Nicolson:
8.911

Now, in the following script, we compare the time difference between Crank-Nicolson and CVode by running the example with the five solvers in turn:

Time for BDF / Newton:
18.894

Time for BDF / Functional:
18.382

Time for Adams / Newton:
10.368

Time for Adams / Functional:
9.815

Time for Crank-Nicolson:
6.652

These results show that on a nonstiff problem, for roughly the same required precision and forcing the same step size, Crank-Nicolson competes with Adams / Functional.

Variable-size step ODE solvers are not appropriate for deterministic real-time applications, because the computational overhead of taking a time step varies over the course of the application.

See also

• LSodar: LSodar (short for Livermore Solver for Ordinary Differential equations, with Automatic method switching for stiff and nonstiff problems, and with Root-finding) is a numerical solver providing an efficient and stable method to solve Ordinary Differential Equation (ODE) Initial Value Problems.
• CVode: CVode (short for C-language Variable-coefficients ODE solver) is a numerical solver providing an efficient and stable method to solve Ordinary Differential Equation (ODE) Initial Value Problems. It uses either BDF or Adams as the implicit integration method, and Newton or Functional iterations.
• IDA: Implicit Differential Algebraic equations system solver, providing an efficient and stable method to solve Differential Algebraic Equation (DAE) Initial Value Problems.
• Runge-Kutta 4(5): a numerical solver providing an efficient explicit method to solve ODE Initial Value Problems.
• Dormand-Prince 4(5): a numerical solver providing an efficient explicit method to solve ODE Initial Value Problems.
• Implicit Runge-Kutta 4(5): a numerical solver providing an efficient and stable implicit method to solve ODE Initial Value Problems.
• DDaskr: Double-precision Differential Algebraic equations system Solver with Krylov method and Rootfinding; a numerical solver providing an efficient and stable method to solve DAE Initial Value Problems.
• Comparisons: this page compares solvers to determine which one is best fitted for the studied problem.
• ode_discrete: ordinary differential equation solver, discrete-time simulation.
• ode_root: ODE solver with root finding.
• odedc: discrete/continuous ODE solver.
• impl: differential algebraic equations.

Bibliography

Advances in Computational Mathematics, Volume 6, Issue 1, 1996, Pages 207-226: "A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type".

Sundials Documentation.

History

Version 6.0.0: the Crank-Nicolson 2(3) solver was added.
https://www.w3schools.in/java-tutorial/operators/logical/
# Java Logical Operators

The Java logical operators work on Boolean operands. They are also called Boolean logical operators. They operate on Boolean values and return a Boolean value as a result.

| Operator | Meaning | Work |
|---|---|---|
| && | Logical AND | The "logical AND" operator evaluates to true only if both operands are true. |
| \|\| | Logical OR | The logical OR operator evaluates to true when at least one of its operands evaluates to true. If either or both expressions evaluate to true, the result is true. |
| ! | Logical NOT | Logical NOT is a unary operator; it operates on a single operand. It reverses the value of the operand: if the value is true, it gives false, and if it is false, it gives true. |

### Program to Show How Logical Operators Work

Example:

```java
public class logicalop {
    public static void main(String[] args) {
        // Variable definition and initialization
        boolean bool1 = true, bool2 = false;

        // Logical AND
        System.out.println("bool1 && bool2 = " + (bool1 && bool2));

        // Logical OR
        System.out.println("bool1 || bool2 = " + (bool1 || bool2));

        // Logical NOT
        System.out.println("!(bool1 && bool2) = " + !(bool1 && bool2));
    }
}
```

Output:

```
bool1 && bool2 = false
bool1 || bool2 = true
!(bool1 && bool2) = true
```
https://edu-answer.com/mathematics/question387878
# What is the place value of the 3 in 0.53?
https://slideplayer.com/slide/3422176/
# Lecture 4: Kjemisk reaksjonsteknikk / Chemical Reaction Engineering
17/04/2015, Department of Chemical Engineering

Presentation transcript:

Slide 1: Lecture 4, Chemical Reaction Engineering. Topics: review of previous lectures; stoichiometry; the stoichiometric table; definitions of concentration; calculating the equilibrium conversion, X_e.

Slide 2: Reactor mole balances in terms of conversion:

| Reactor | Differential | Algebraic | Integral |
|---|---|---|---|
| Batch | N_A0 dX/dt = (-r_A)V | | t = N_A0 ∫ dX/((-r_A)V) |
| CSTR | | V = F_A0 X/(-r_A) | |
| PFR | F_A0 dX/dV = -r_A | | V = F_A0 ∫ dX/(-r_A) |
| PBR | F_A0 dX/dW = -r_A' | | W = F_A0 ∫ dX/(-r_A') |

(integrals taken from X = 0 to the final conversion X)

Slide 3: Last lecture: relative rates of reaction, (-r_A)/a = (-r_B)/b = r_C/c = r_D/d.

Slide 4: Last lecture: rate laws, the power-law model. A reaction follows an elementary rate law if the reaction orders just happen to agree with the stoichiometric coefficients for the reaction as written; e.g., if the above reaction follows an elementary rate law, it is 2nd order in A, 1st order in B, and third order overall.

Slide 5: Last lecture: the Arrhenius equation. k is the specific reaction rate (constant) and is given by the Arrhenius equation, k = A e^(-E/RT); k grows with temperature T.

Slide 6: These topics build upon one another: mole balance, rate laws, stoichiometry (reaction engineering).

Slide 7: How to find -r_A = f(X). Step 1: rate law. Step 2: stoichiometry. Step 3: combine to get -r_A = f(X).

Slide 8: We shall set up stoichiometry tables using species A as our basis of calculation for the reaction aA + bB → cC + dD. We will use the stoichiometric tables to express the concentration as a function of conversion.
We will combine C_j = f(X) with the appropriate rate law to obtain -r_A = f(X). A is the limiting reactant.

Slide 9: For every mole of A that reacts, b/a moles of B react; therefore the moles of B remaining are N_B = N_B0 - (b/a)N_A0 X. Letting Θ_B = N_B0/N_A0, this becomes N_B = N_A0(Θ_B - (b/a)X).

Slide 10: Batch system stoichiometry table:

| Species | Symbol | Initial | Change | Remaining |
|---|---|---|---|---|
| A | A | N_A0 | -N_A0 X | N_A = N_A0(1 - X) |
| B | B | N_B0 = N_A0 Θ_B | -(b/a)N_A0 X | N_B = N_A0(Θ_B - (b/a)X) |
| C | C | N_C0 = N_A0 Θ_C | +(c/a)N_A0 X | N_C = N_A0(Θ_C + (c/a)X) |
| D | D | N_D0 = N_A0 Θ_D | +(d/a)N_A0 X | N_D = N_A0(Θ_D + (d/a)X) |
| Inert | I | N_I0 = N_A0 Θ_I | - | N_I = N_A0 Θ_I |
| Total | | N_T0 | | N_T = N_T0 + δ N_A0 X |

where Θ_i = N_i0/N_A0 and δ = d/a + c/a - b/a - 1 is the change in the total number of moles per mole of A reacted.

Slide 11: Constant-volume batch. Note: if the reaction occurs in the liquid phase, or if a gas-phase reaction occurs in a rigid (e.g. steel) batch reactor, then V = V0 and C_A = N_A/V0 = C_A0(1 - X), etc.

Slide 12: Suppose -r_A = k_A C_A C_B. With the batch stoichiometry above (constant volume), -r_A = k_A C_A0² (1 - X)(Θ_B - (b/a)X). Equimolar feed: Θ_B = 1. Stoichiometric feed: Θ_B = b/a.

Slides 13-15: Flow system stoichiometric table:

| Species | Symbol | Reactor Feed | Change | Reactor Effluent |
|---|---|---|---|---|
| A | A | F_A0 | -F_A0 X | F_A = F_A0(1 - X) |
| B | B | F_B0 = F_A0 Θ_B | -(b/a)F_A0 X | F_B = F_A0(Θ_B - (b/a)X) |
| C | C | F_C0 = F_A0 Θ_C | +(c/a)F_A0 X | F_C = F_A0(Θ_C + (c/a)X) |
| D | D | F_D0 = F_A0 Θ_D | +(d/a)F_A0 X | F_D = F_A0(Θ_D + (d/a)X) |
| Inert | I | F_I0 = F_A0 Θ_I | - | F_I = F_A0 Θ_I |
| Total | | F_T0 | | F_T = F_T0 + δ F_A0 X |

where Θ_i = F_i0/F_A0 and δ is as defined above. Concentration in a flow system: C_j = F_j/v.
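The "Remaining" (batch) and "Effluent" (flow) columns of these stoichiometric tables can be generated mechanically from N_j = N_A0(Θ_j + (ν_j/a)X). A small Python sketch (the helper name and the example reaction are invented for illustration):

```python
def moles_remaining(N_A0, theta, nu_over_a, X):
    """Moles of each species at conversion X: N_j = N_A0 * (theta_j + (nu_j/a) * X).

    theta[j]     : feed ratio N_j0 / N_A0
    nu_over_a[j] : stoichiometric coefficient of j divided by a
                   (negative for reactants, positive for products, 0 for inerts)
    """
    return {j: N_A0 * (theta[j] + nu_over_a[j] * X) for j in theta}

# Example: A + 2B -> C with N_A0 = 10 mol, a stoichiometric feed of B, and inert I.
theta     = {"A": 1.0, "B": 2.0, "C": 0.0, "I": 1.0}
nu_over_a = {"A": -1.0, "B": -2.0, "C": 1.0, "I": 0.0}
N = moles_remaining(10.0, theta, nu_over_a, X=0.5)
print(N)  # {'A': 5.0, 'B': 10.0, 'C': 5.0, 'I': 10.0}
```

The same function gives the flow-table effluent rates if N_A0 is replaced by F_A0.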
Slide 16: Concentration in liquid-phase flow systems. For liquids the volumetric flow rate is constant, v = v0, so C_A = F_A/v0 = C_A0(1 - X), etc.

Slide 17: If the rate of reaction were -r_A = k C_A C_B, then for the liquid phase we would have -r_A = k C_A0²(1 - X)(Θ_B - (b/a)X). This gives us -r_A = f(X).

Slide 18: Gas-phase flow systems. Combining the compressibility-factor equation of state with Z = Z0, we obtain v = v0 (F_T/F_T0)(P0/P)(T/T0).

Slides 19-21: The total molar flow rate is F_T = F_T0 + δ F_A0 X. Substituting F_T gives v = v0 (1 + εX)(P0/P)(T/T0), where ε = y_A0 δ.

Slides 22-23: Gas-phase concentrations: C_j = F_j/v = C_A0 (Θ_j + (ν_j/a)X)/(1 + εX) · (P/P0)(T0/T); that is, C_j = f(F_j, T, P) = f(X, T, P).

Slide 24: If -r_A = k C_A C_B, this gives us -r_A = f(X) and hence the Levenspiel plot of F_A0/(-r_A) versus X.

Slide 25: Example: calculating the equilibrium conversion for a gas-phase reaction in a flow reactor, X_ef. Consider the following elementary reaction with K_C = 20 dm³/mol and C_A0 = 0.2 mol/dm³. Calculate the equilibrium conversion for both a batch reactor (X_eb) and a flow reactor (X_ef).

Slide 27: Consider the elementary reaction with K_C = 20 dm³/mol and C_A0 = 0.2 mol/dm³; find X_e for both a batch reactor and a flow reactor.
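Before working the example, the gas-phase concentration expression above, C_j = C_A0(Θ_j + (ν_j/a)X)/(1 + εX) · (P/P0)(T0/T), can be sketched in Python (the function name and the A → 2B example are invented for illustration):

```python
def gas_concentration(C_A0, theta_j, nu_j_over_a, X, eps,
                      P=1.0, P0=1.0, T=300.0, T0=300.0):
    """Gas-phase flow concentration of species j at conversion X.

    eps = y_A0 * delta accounts for the change in total moles on reaction;
    the (P/P0)*(T0/T) factor drops out for isothermal, isobaric operation.
    """
    return C_A0 * (theta_j + nu_j_over_a * X) / (1.0 + eps * X) * (P / P0) * (T0 / T)

# Example: A -> 2B with a pure-A feed (y_A0 = 1), so delta = 2 - 1 = 1 and eps = 1.
C_A0 = 0.2
C_A = gas_concentration(C_A0, theta_j=1.0, nu_j_over_a=-1.0, X=0.5, eps=1.0)
C_B = gas_concentration(C_A0, theta_j=0.0, nu_j_over_a=2.0, X=0.5, eps=1.0)
print(C_A, C_B)  # volume expansion dilutes both species: 0.2*0.5/1.5 and 0.2*1.0/1.5
```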
Batch Reactor Example (slides 27-31): calculating the equilibrium conversion for a gas-phase reaction, X_e.

Slide 28: Step 1: stoichiometry. Step 2: rate law; for this elementary reaction, -r_A = k_A(C_A² - C_B/K_C). Calculate X_e.

Slide 29: Batch stoichiometric table for A ⇌ ½B:

| Symbol | Initial | Change | Remaining |
|---|---|---|---|
| A | N_A0 | -N_A0 X | N_A = N_A0(1 - X) |
| B | 0 | +N_A0 X/2 | N_B = N_A0 X/2 |
| Totals | N_T0 = N_A0 | | N_T = N_A0 - N_A0 X/2 |

At equilibrium: -r_A = 0.

Slide 30: Solution. For a constant-volume batch reactor, C_A = C_A0(1 - X) and C_B = C_A0 X/2. At equilibrium -r_A = 0, so K_C = C_Be/C_Ae² = (X_e/2)/(C_A0(1 - X_e)²).

Slide 31: Solving this expression gives the batch equilibrium conversion, X_eb.

Slide 32: Gas-phase flow example, X_ef. Rate law: -r_A = k_A(C_A² - C_B/K_C).

Slide 33: Flow stoichiometric table:

| Species | Fed | Change | Remaining |
|---|---|---|---|
| A | F_A0 | -F_A0 X | F_A = F_A0(1 - X) |
| B | 0 | +F_A0 X/2 | F_B = F_A0 X/2 |
| Total | F_T0 = F_A0 | | F_T = F_A0 - F_A0 X/2 |

Slide 34: Stoichiometry: gas phase, isothermal (T = T0) and isobaric (P = P0), so C_A = C_A0(1 - X)/(1 + εX) and C_B = C_A0(X/2)/(1 + εX).

Slide 35: Pure A feed, so y_A0 = 1 and C_A0 = y_A0 P0/(R T0) = P0/(R T0). At equilibrium: -r_A = 0.

Slide 36: Gas flow example, X_ef: the flow result, compared with the batch result.
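As a numerical check on this example: assuming the stoichiometry A ⇌ ½B implied by the tables above and the equilibrium relation K_C = C_Be/C_Ae², both equilibrium conversions can be found with a simple bisection (an illustrative Python sketch, not part of the lecture):

```python
def bisect(g, lo, hi, tol=1e-12):
    """Plain bisection root finder; assumes g(lo) and g(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

Kc, C_A0 = 20.0, 0.2

# Batch, constant volume: C_A = C_A0*(1-X), C_B = C_A0*X/2, Kc = C_Be/C_Ae^2.
g_batch = lambda X: (X / 2) / (C_A0 * (1 - X) ** 2) - Kc
X_eb = bisect(g_batch, 0.0, 0.999)

# Gas flow, pure A feed: eps = y_A0*delta = 1*(1/2 - 1) = -1/2, isothermal/isobaric,
# so C_A = C_A0*(1-X)/(1 - X/2) and C_B = C_A0*(X/2)/(1 - X/2).
g_flow = lambda X: (X / 2) * (1 - X / 2) / (C_A0 * (1 - X) ** 2) - Kc
X_ef = bisect(g_flow, 0.0, 0.999)

print(round(X_eb, 3), round(X_ef, 3))  # -> 0.703 0.757
```

The flow conversion is higher because the decrease in total moles (ε < 0) contracts the gas volume at constant pressure, shifting the equilibrium further toward B.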
https://doc.modelica.org/Modelica%203.2.3/Resources/helpWSM/Modelica/Modelica.Blocks.Interfaces.SISO.html
# SISO

Single Input Single Output continuous control block

# Information

This information is part of the Modelica Standard Library maintained by the Modelica Association.

Block has one continuous Real input and one continuous Real output signal.

# Connectors (2)

- u: Type RealInput. Description: Connector of Real input signal
- y: Type RealOutput. Description: Connector of Real output signal

# Extended by (43)

- Filter (Modelica.Electrical.PowerConverters.ACDC.Control): PT1 + all-pass filter
- LimitedPI (Modelica.Electrical.Machines.Examples.ControlledDCDrives.Utilities): Limited PI-controller with anti-windup and feed-forward
- RealPassThrough (Modelica.Blocks.Routing): Pass a Real signal through without modification
- VariableDelay (Modelica.Blocks.Nonlinear): Delay block with variable DelayTime
- PadeDelay (Modelica.Blocks.Nonlinear): Pade approximation of delay block with fixed delayTime (use balance=true; this is not the default to be backwards compatible)
- FixedDelay (Modelica.Blocks.Nonlinear): Delay block with fixed DelayTime
- DeadZone (Modelica.Blocks.Nonlinear): Provide a region of zero output
- SlewRateLimiter (Modelica.Blocks.Nonlinear): Limits the slew rate of a signal
- VariableLimiter (Modelica.Blocks.Nonlinear): Limit the range of a signal with variable limits
- Limiter (Modelica.Blocks.Nonlinear): Limit the range of a signal
- TotalHarmonicDistortion (Modelica.Blocks.Math): Output the total harmonic distortion (THD)
- RootMeanSquare (Modelica.Blocks.Math): Calculate root mean square over period 1/f
- RectifiedMean (Modelica.Blocks.Math): Calculate rectified mean over period 1/f
- Mean (Modelica.Blocks.Math): Calculate mean over period 1/f
- WrapAngle (Modelica.Blocks.Math): Wrap angle to interval ]-pi,pi] or [0,2*pi[
- Log10 (Modelica.Blocks.Math): Output the base 10 logarithm of the input (input > 0 required)
- Log (Modelica.Blocks.Math): Output the logarithm (default base e) of the input (input > 0 required)
- Power (Modelica.Blocks.Math): Output the power to a base of the input
- Exp (Modelica.Blocks.Math): Output the exponential (base e) of the input
- Tanh (Modelica.Blocks.Math): Output the hyperbolic tangent of the input
- Cosh (Modelica.Blocks.Math): Output the hyperbolic cosine of the input
- Sinh (Modelica.Blocks.Math): Output the hyperbolic sine of the input
- Atan (Modelica.Blocks.Math): Output the arc tangent of the input
- Acos (Modelica.Blocks.Math): Output the arc cosine of the input
- Asin (Modelica.Blocks.Math): Output the arc sine of the input
- Tan (Modelica.Blocks.Math): Output the tangent of the input
- Cos (Modelica.Blocks.Math): Output the cosine of the input
- Sin (Modelica.Blocks.Math): Output the sine of the input
- Sqrt (Modelica.Blocks.Math): Output the square root of the input (input >= 0 required)
- Sign (Modelica.Blocks.Math): Output the sign of the input
- Abs (Modelica.Blocks.Math): Output the absolute value of the input
- Filter (Modelica.Blocks.Continuous): Continuous low pass, high pass, band pass or band stop IIR-filter of type CriticalDamping, Bessel, Butterworth or ChebyshevI
- CriticalDamping (Modelica.Blocks.Continuous): Output the input signal filtered with an n-th order filter with critical damping
- LowpassButterworth (Modelica.Blocks.Continuous): Output the input signal filtered with a low pass Butterworth filter of any order
- Der (Modelica.Blocks.Continuous): Derivative of input (= analytic differentiation)
- TransferFunction (Modelica.Blocks.Continuous): Linear transfer function
- PID (Modelica.Blocks.Continuous): PID-controller in additive description form
- PI (Modelica.Blocks.Continuous): Proportional-Integral controller
- SecondOrder (Modelica.Blocks.Continuous): Second order transfer function block (= 2 poles)
- FirstOrder (Modelica.Blocks.Continuous): First order transfer function block (= 1 pole)
- Derivative (Modelica.Blocks.Continuous): Approximated derivative block
- LimIntegrator (Modelica.Blocks.Continuous): Integrator with limited value of the output and optional reset
- Integrator (Modelica.Blocks.Continuous): Output the integral of the input signal with optional reset
{"ft_lang_label":"__label__en","ft_lang_prob":0.60674286,"math_prob":0.71432215,"size":3556,"snap":"2020-24-2020-29","text_gpt3_token_len":834,"char_repetition_ratio":0.31672296,"word_repetition_ratio":0.2008547,"special_character_ratio":0.19178852,"punctuation_ratio":0.16534181,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98152626,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-10T15:13:23Z\",\"WARC-Record-ID\":\"<urn:uuid:354d6424-692f-43c2-af64-2949c883c853>\",\"Content-Length\":\"18014\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:089f1d5b-2828-43ec-80e2-3508686f5439>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc4e8693-67bc-4563-bb03-784da1855b4a>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://doc.modelica.org/Modelica%203.2.3/Resources/helpWSM/Modelica/Modelica.Blocks.Interfaces.SISO.html\",\"WARC-Payload-Digest\":\"sha1:B6B2YB2YQQKXZNRSVNAJQUQKPLDHLEOC\",\"WARC-Block-Digest\":\"sha1:D2664BPHRZH46WRP2SRYOHG4DE66OFGQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655911092.63_warc_CC-MAIN-20200710144305-20200710174305-00352.warc.gz\"}"}