Columns: question (string, 11-28.2k chars), answer (string, 26-27.7k chars), tag (string, 130 classes), question_id (int64, 935-78.4M), score (int64, 10-5.49k)
I am a little bit confused about the data augmentation performed in PyTorch. Now, as far as I know, when we are performing data augmentation, we are KEEPING our original dataset, and then adding other versions of it (Flipping, Cropping...etc). But that doesn't seem like happening in PyTorch. As far as I understood from the references, when we use data.transforms in PyTorch, then it applies them one by one. So for example: data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } Here , for the training, we are first randomly cropping the image and resizing it to shape (224,224). Then we are taking these (224,224) images and horizontally flipping them. Therefore, our dataset is now containing ONLY the horizontally flipped images, so our original images are lost in this case. Am I right? Is this understanding correct? If not, then where do we tell PyTorch in this code above (taken from Official Documentation) to keep the original images and resize them to the expected shape (224,224)? Thanks
I assume you are asking whether these data augmentation transforms (e.g. RandomHorizontalFlip) actually increase the size of the dataset as well, or whether they are applied to each item in the dataset one by one without adding to the size of the dataset. Running the following simple code snippet we can observe that the latter is true, i.e. if you have a dataset of 8 images and create a PyTorch dataset object for it, then when you iterate through the dataset the transformations are called on each data point and the transformed data point is returned. So for example if you have random flipping, some of the data points are returned as original and some are returned as flipped (e.g. 4 flipped and 4 original). In other words, in one iteration through the dataset items, you get 8 data points (some flipped and some not). [This is at odds with the conventional understanding of augmenting the dataset (e.g. in this case having 16 data points in the augmented dataset).] import torch from torch.utils.data import Dataset from torchvision import transforms class experimental_dataset(Dataset): def __init__(self, data, transform): self.data = data self.transform = transform def __len__(self): return self.data.shape[0] def __getitem__(self, idx): item = self.data[idx] item = self.transform(item) return item transform = transforms.Compose([ transforms.ToPILImage(), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) x = torch.rand(8, 1, 2, 2) print(x) dataset = experimental_dataset(x, transform) for item in dataset: print(item) Results: (The little differences in the floating point values are caused by transforming to a PIL image and back.) Original dummy dataset: tensor([[[[0.1872, 0.5518], [0.5733, 0.6593]]], [[[0.6570, 0.6487], [0.4415, 0.5883]]], [[[0.5682, 0.3294], [0.9346, 0.1243]]], [[[0.1829, 0.5607], [0.3661, 0.6277]]], [[[0.1201, 0.1574], [0.4224, 0.6146]]], [[[0.9301, 0.3369], [0.9210, 0.9616]]], [[[0.8567, 0.2297], [0.1789, 0.8954]]], [[[0.0068, 0.8932], [0.9971, 0.3548]]]]) transformed dataset: tensor([[[0.1843, 0.5490], [0.5725, 0.6588]]]) tensor([[[0.6549, 0.6471], [0.4392, 0.5882]]]) tensor([[[0.5647, 0.3255], [0.9333, 0.1216]]]) tensor([[[0.5569, 0.1804], [0.6275, 0.3647]]]) tensor([[[0.1569, 0.1176], [0.6118, 0.4196]]]) tensor([[[0.9294, 0.3333], [0.9176, 0.9608]]]) tensor([[[0.8549, 0.2275], [0.1765, 0.8941]]]) tensor([[[0.8902, 0.0039], [0.3529, 0.9961]]])
DataSet
51,677,788
100
Say I have large datasets in R and I just want to know whether two of them are the same. I use this often when I'm experimenting with different algorithms to achieve the same result. For example, say we have the following datasets: df1 <- data.frame(num = 1:5, let = letters[1:5]) df2 <- df1 df3 <- data.frame(num = c(1:5, NA), let = letters[1:6]) df4 <- df3 So this is what I do to compare them: table(x == y, useNA = 'ifany') Which works great when the datasets have no NAs: > table(df1 == df2, useNA = 'ifany') TRUE 10 But not so much when they have NAs: > table(df3 == df4, useNA = 'ifany') TRUE <NA> 11 1 In the example, it's easy to dismiss the NA as not a problem since we know that both dataframes are equal. The problem is that NA == <anything> yields NA, so whenever one of the datasets has an NA, it doesn't matter what the other one has in that same position, the result is always going to be NA. So using table() to compare datasets doesn't seem ideal to me. How can I better check if two data frames are identical? P.S.: Note this is not a duplicate of R - comparing several datasets, Comparing 2 datasets in R or Compare datasets in R
Look up all.equal. It has some riders but it might work for you. all.equal(df3,df4) # [1] TRUE all.equal(df2,df1) # [1] TRUE
DataSet
19,119,320
98
I use a DataTable with information about users and I want to search for a user or a list of users in this DataTable. I tried it but it doesn't work :( Here is my C# code: public DataTable GetEntriesBySearch(string username, string location, DataTable table) { list = null; list = table; string expression; string sortOrder; expression = "Nachname = 'test'"; sortOrder = "nachname DESC"; DataRow[] rows = list.Select(expression, sortOrder); list = null; // for testing list = new DataTable(); // for testing foreach (DataRow row in rows) { list.ImportRow(row); } return list; }
You can use DataView. DataView dv = new DataView(yourDatatable); dv.RowFilter = "query"; // query example = "id = 10" http://www.csharp-examples.net/dataview-rowfilter/
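A rough sketch of how that could look for the table in the question (the Nachname filter and sort string are just taken from the posted code, and ToTable() is one way to get a DataTable back out of the view):
DataView dv = new DataView(table);
dv.RowFilter = "Nachname = 'test'";     // or build the filter from the username parameter
dv.Sort = "Nachname DESC";
DataTable result = dv.ToTable();        // materializes only the filtered, sorted rows
return result;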
DataSet
13,012,585
94
Just having some problems running a simulation on some weather data in Python. The data was supplied in a .tif format, so I used the following code to try to open the image to extract the data into a numpy array. from PIL import Image im = Image.open('jan.tif') But when I run this code I get the following error: PIL.Image.DecompressionBombError: Image size (933120000 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack. It looks like this is just some kind of protection against this type of attack, but I actually need the data and it is from a reputable source. Is there any way to get around this or do I have to look for another way to do this?
Try PIL.Image.MAX_IMAGE_PIXELS = 933120000 How to find out such a thing? import PIL print(PIL.__file__) # prints, e. g., /usr/lib/python3/dist-packages/PIL/__init__.py Then cd /usr/lib/python3/dist-packages/PIL grep -r -A 2 'exceeds limit' . prints ./Image.py: "Image size (%d pixels) exceeds limit of %d pixels, " ./Image.py- "could be decompression bomb DOS attack." % ./Image.py- (pixels, MAX_IMAGE_PIXELS), Then grep -r MAX_IMAGE_PIXELS . prints ./Image.py:MAX_IMAGE_PIXELS = int(1024 * 1024 * 1024 / 4 / 3) ./Image.py: if MAX_IMAGE_PIXELS is None: ./Image.py: if pixels > MAX_IMAGE_PIXELS: ./Image.py: (pixels, MAX_IMAGE_PIXELS), Then python3 import PIL.Image PIL.Image.MAX_IMAGE_PIXELS = 933120000 Works without complaint and fixes your issue.
DataSet
51,152,059
91
I'm just getting started using ADO.NET and DataSets and DataTables. One problem I'm having is it seems pretty hard to tell what values are in the data table when trying to debug. What are some of the easiest ways of quickly seeing what values have been saved in a DataTable? Is there someway to see the contents in Visual Studio while debugging or is the only option to write the data out to a file? I've created a little utility function that will write a DataTable out to a CSV file. Yet the the resulting CSV file created was cut off. About 3 lines from what should have been the last line in the middle of writing out a System.Guid the file just stops. I can't tell if this is an issue with my CSV conversion method, or the original population of the DataTable. Update Forget the last part I just forgot to flush my stream writer.
With a break point set, after the DataTable or DataSet is populated, you can see a magnifying glass if you hover over the variable. If you click on it, it will bring up the DataTable Visualizer, which you can read about here. In this image you see below, dt is my DataTable variable and the breakpoint was hit a few lines below allowing me to hover over this value. Using Visual Studio 2008. DataTable Visualizer (image credit):
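If you also want to see the values outside the debugger (for example when writing them to the console or a log), a quick-and-dirty dump loop works too; this sketch just assumes dt is your populated DataTable:
foreach (DataColumn col in dt.Columns)
    Console.Write(col.ColumnName + "\t");
Console.WriteLine();
foreach (DataRow row in dt.Rows)
{
    foreach (object value in row.ItemArray)
        Console.Write((value == DBNull.Value ? "<null>" : value.ToString()) + "\t");
    Console.WriteLine();
}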
DataSet
1,337,084
90
I have an ASP.NET application, and now I am using datasets for data manipulation. I recently started to convert this dataset to a List collection. But in some places it doesn't work. One example: in my old version I am using datarow[] drow = dataset.datatable.select(searchcriteria). But in the List collection there is no method available for finding particular values. Is there any way for me to select some values according to my search criteria? I want to know if this is possible. Please help me.
Well, to start with List<T> does have the FindAll and ConvertAll methods - but the more idiomatic, modern approach is to use LINQ: // Find all the people older than 30 var query1 = list.Where(person => person.Age > 30); // Find each person's name var query2 = list.Select(person => person.Name); You'll need a using directive in your file to make this work: using System.Linq; Note that these don't use strings to express predicates and projects - they use delegates, usually created from lambda expressions as above. If lambda expressions and LINQ are new to you, I would suggest you get a book covering LINQ first, such as LINQ in Action, Pro LINQ, C# 4 in a Nutshell or my own C# in Depth. You certainly can learn LINQ just from web tutorials, but I think it's such an important technology, it's worth taking the time to learn it thoroughly.
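For completeness, a small sketch of the pre-LINQ methods mentioned above (Person, Age and Name are just illustrative names):
// Same results as the two LINQ queries, using List<T>'s own methods
List<Person> over30 = list.FindAll(person => person.Age > 30);
List<string> names = list.ConvertAll(person => person.Name);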
DataSet
3,801,748
79
At the moment, when I iterate over the DataRow instances, I do this. foreach(DataRow row in table) return yield new Thingy { Name = row["hazaa"] }; Sooner or later (i.e. sooner), I'll get the table to be missing the column donkey and the poo will hit the fan. After some extensive googling (about 30 seconds) I discovered the following protection syntax. foreach(DataRow row in table) if(row.Table.Columns.Contains("donkey")) return yield new Thingy { Name = row["hazaa"] }; else return null; Now - is this the simplest syntax?! Really? I was expecting a method that gets me the field if it exists or null otherwise. Or at least a Contains method directly on the row. Am I missing something? I'll be mapping in many fields that way so the code will look dreadfully unreadable...
You can create an extension method to make it cleaner: static class DataRowExtensions { public static object GetValue(this DataRow row, string column) { return row.Table.Columns.Contains(column) ? row[column] : null; } } Now call it like below: foreach(DataRow row in table) yield return new Thingy { Name = row.GetValue("hazaa") };
DataSet
18,208,311
77
Here is my C# code: Employee objEmp = new Employee(); List<Employee> empList = new List<Employee>(); foreach (DataRow dr in ds.Tables[0].Rows) { empList.Add(new Employee { Name = Convert.ToString(dr["Name"]), Age = Convert.ToInt32(dr["Age"]) }); } It uses a loop to create a List from a dataset. Is there any direct method, shorter method, or one-line piece of code to convert a dataset to a list?
Try something like this: var empList = ds.Tables[0].AsEnumerable() .Select(dataRow => new Employee { Name = dataRow.Field<string>("Name") }).ToList();
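If you also want the Age column from the question mapped, Field<T> can be used the same way (and Field<int?> would be the variant to reach for if the column can contain DBNull):
var empList = ds.Tables[0].AsEnumerable()
    .Select(dataRow => new Employee
    {
        Name = dataRow.Field<string>("Name"),
        Age = dataRow.Field<int>("Age")
    }).ToList();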
DataSet
17,107,220
76
I was trying to generate a Report using Export to Excell, PDF, TextFile. Well I am doing this in MVC. I have a class which I named SPBatch (which is the exact name of my Stored Procedure in my SQL) and it contains the following: public string BatchNo { get; set; } public string ProviderName { get; set; } public Nullable<System.Int32> NoOfClaims { get; set; } public Nullable<System.Int32> TotalNoOfClaims { get; set; } public Nullable<System.Decimal> TotalBilled { get; set; } public Nullable<System.Decimal> TotalInputtedBill { get; set; } public Nullable<System.DateTime> DateCreated { get; set; } public Nullable<System.DateTime> DateSubmitted { get; set; } public Nullable<System.DateTime> DueDate { get; set; } public string Status { get; set; } public string RefNo { get; set; } public string BatchStatus { get; set; } public string ClaimType { get; set; } as you can see some of my Columns are declared as Nullable. It went smoothly from searching and displaying the results in a table. I have several buttons below which are image buttons for export and every time I try to export in Excel, I always get the problem "DataSet does not support System.Nullable<>" in this part of my code: foreach (MemberInfo mi in miArray) { if (mi.MemberType == MemberTypes.Property) { PropertyInfo pi = mi as PropertyInfo; dt.Columns.Add(pi.Name, pi.PropertyType); //where the error pop's up. } else if (mi.MemberType == MemberTypes.Field) { FieldInfo fi = mi as FieldInfo; dt.Columns.Add(fi.Name, fi.FieldType); } } the error shows up on the one with a comment. Can you help me what to do? I tried adding DBNull in my code but still I get the same error. I tried removing Nullable in my SPBatch but I get an error that some tables are need to be declared as Nullable. What should I do?
Try with: dt.Columns.Add(pi.Name, Nullable.GetUnderlyingType(pi.PropertyType) ?? pi.PropertyType);
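In context, that line plus the row-filling step could look roughly like this; SPBatch and dt are the names from the question, batches stands in for whatever collection you are exporting, and null property values have to be written as DBNull.Value:
foreach (PropertyInfo pi in typeof(SPBatch).GetProperties())
{
    dt.Columns.Add(pi.Name, Nullable.GetUnderlyingType(pi.PropertyType) ?? pi.PropertyType);
}
foreach (SPBatch batch in batches)
{
    DataRow row = dt.NewRow();
    foreach (PropertyInfo pi in typeof(SPBatch).GetProperties())
        row[pi.Name] = pi.GetValue(batch, null) ?? DBNull.Value;
    dt.Rows.Add(row);
}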
DataSet
23,233,295
71
I am reading an XML file into a DataSet and need to get the data out of the DataSet. Since it is a user-editable config file the fields may or may not be there. To handle missing fields well I'd like to make sure each column in the DataRow exists and is not DBNull. I already check for DBNull but I don't know how to make sure the column exists without having it throw an exception or using a function that loops over all the column names. What is the best method to do this?
DataRows are nice in that they keep a link to their underlying table. Through the underlying table you can verify that a specific row has a specific column in it. If row.Table.Columns.Contains("column") Then MsgBox("YAY") End If
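The C# equivalent, combined with the DBNull check you already have, would look something like this ("column" is just a placeholder name):
if (row.Table.Columns.Contains("column") && !row.IsNull("column"))
{
    object value = row["column"];
    // safe to use value here
}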
DataSet
178,712
62
Is it possible to use any datasets available via the kaggle API in Google Colab? I see the Kaggle API is used in this Colab notebook, but it's a bit unclear to me what datasets it provides access to.
Step-by-step -- Create an API key in Kaggle. To do this, go to kaggle.com/ and open your user settings page. Next, scroll down to the API access section and click generate to download an API key. This will download a file called kaggle.json to your computer. You'll use this file in Colab to access Kaggle datasets and competitions. Navigate to https://colab.research.google.com/. Upload your kaggle.json file using the following snippet in a code cell: from google.colab import files files.upload() Install the kaggle API using !pip install -q kaggle Move the kaggle.json file into ~/.kaggle, which is where the API client expects your token to be located: !mkdir -p ~/.kaggle !cp kaggle.json ~/.kaggle/ Now you can access datasets using the client, e.g., !kaggle datasets list. Here's a complete example notebook of the Colab portion of this process: https://colab.research.google.com/drive/1DofKEdQYaXmDWBzuResXWWvxhLgDeVyl This example shows uploading the kaggle.json file, the Kaggle API client, and using the Kaggle client to download a dataset.
DataSet
49,310,470
61
I am reading some values in data attribute fields. I have seen two easy ways to read the data as shown below: var webappData = document.getElementById('web-app-data'), rating = webappData.dataset.rating; OR var effectData = $('.effects-list li'), creative = effectData.filter('[data-creative]').data("creative"); My question is which of these has better performance or do they really differ? I have a page with many data attributes that I am accessing and I would like to use the method that has the best performance. Any guidance on understanding the difference between the two would be appreciated. While I am looking at performance specifically if there are other reasons to use one over the other I would like to know this as well.
dataset is a native property of an element that contains the data attributes, it's a new(ish) addition and as such is only supported in IE11+, Chrome 8+, FF 6+ etc. A more cross browser solution would be to get the attribute directly webappData.getAttribute('data-rating'); data() is a jQuery method, and other than using the HTML5 data attribute to set the inital value if none exists internally, it has nothing in common with dataset. data() stores whatever data you pass it in an internal object created by jQuery, so this for instance would fail $(element).data('key', 'value'); element.dataset.key // undefined as the data is not stored in the attributes at all, but internally by jQuery. The jQuery equivalent of getting and setting the data attribute would be attr() $(element).attr('data-key', 'value'); The native methods are probably faster, but as they are not really comparable to jQuery's data() it doesn't really matter, but for getting the data attribute I would think the fastest method with the best browser support would be var rating = webappData.getAttribute('data-rating');
DataSet
23,596,751
58
Can someone please help how to get the list of built-in data sets and their dependency packages?
There are several ways to find the included datasets in R: 1: Using data() will give you a list of the datasets of all loaded packages (and not only the ones from the datasets package); the datasets are ordered by package 2: Using data(package = .packages(all.available = TRUE)) will give you a list of all datasets in the available packages on your computer (i.e. also the not-loaded ones) 3: Using data(package = "packagename") will give you the datasets of that specific package, so data(package = "plyr") will give the datasets in the plyr package If you want to know in which package a dataset is located (e.g. the acme dataset), you can do: dat <- as.data.frame(data(package = .packages(all.available = TRUE))$results) dat[dat$Item=="acme", c(1,3,4)] which gives: Package Item Title 107 boot acme Monthly Excess Returns
DataSet
33,797,666
57
What is the most direct route to get a DataSet if I have a sql command? string sqlCommand = "SELECT * FROM TABLE"; string connectionString = "blahblah"; DataSet = GetDataSet(sqlCommand,connectionString); GetDataSet() { //...? } I started with SqlConnection and SqlCommand, but the closest thing I see in the API is SqlCommand.ExecuteReader(). With this method, I'll need to get a SqlDataReader and then convert this to a DataSet manually. I figure there is a more direct route to accomplish the task. If easier, a DataTable will also fit my goal.
public DataSet GetDataSet(string ConnectionString, string SQL) { SqlConnection conn = new SqlConnection(ConnectionString); SqlDataAdapter da = new SqlDataAdapter(); SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = SQL; da.SelectCommand = cmd; DataSet ds = new DataSet(); ///conn.Open(); da.Fill(ds); ///conn.Close(); return ds; }
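The same idea with the connection and adapter wrapped in using blocks, in case you prefer deterministic disposal (just a variation, not a requirement):
public DataSet GetDataSet(string connectionString, string sql)
{
    DataSet ds = new DataSet();
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlDataAdapter da = new SqlDataAdapter(sql, conn))
    {
        da.Fill(ds);   // Fill opens and closes the connection itself
    }
    return ds;
}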
DataSet
6,584,817
55
I have an Excel worksheet I want to read into a datatable - all is well except for one particular column in my Excel sheet. The column, 'ProductID', is a mix of values like ########## and n#########. I tried to let OleDB handle everything by itself automatically by reading it into a dataset/datatable, but any values in 'ProductID' like n###### are missing, ignored, and left blank. I tried manually creating my DataTable by looping through each row with a datareader, but with the exact same results. Here's the code : // add the column names manually to the datatable as column_1, column_2, ... for (colnum = 0; colnum < num_columns; colnum ++){ ds.Tables["products"].Columns.Add("column_" +colnum , System.Type.GetType("System.String")); } while(myDataReader.Read()){ // loop through each excel row adding a new respective datarow to my datatable DataRow a_row = ds.Tables["products"].NewRow(); for (col = 0; col < num_columns; col ++){ try { a_row[col] = rdr.GetString(col); } catch { a_row[col] = rdr.GetValue(col).ToString(); } } ds.Tables["products"].Rows.Add(a_row); } I don't understand why it won't let me read in values like n######. How can I do this?
Using .Net 4.0 and reading Excel files, I had a similar issue with OleDbDataAdapter - i.e. reading in a mixed data type on a "PartID" column in MS Excel, where a PartID value can be numeric (e.g. 561) or text (e.g. HL4354), even though the excel column was formatted as "Text". From what I can tell, ADO.NET chooses the data type based on the majority of the values in the column (with a tie going to numeric data type). i.e. if most of the PartID's in the sample set are numeric, ADO.NET will declare the column to be numeric. Therefore ADO.Net will attempt to cast each cell to a number, which will fail for the "text" PartID values and not import those "text" PartID's. My solution was to set the OleDbConnection connectionstring to use Extended Properties=IMEX=1;HDR=NO to indicate this is an Import and that the table(s) will not include headers. The excel file has a header row, so in this case tell ado.net not to use it. Then later in the code, remove that header row from the dataset and voilà you have mixed data type for that column. string sql = "SELECT F1, F2, F3, F4, F5 FROM [sheet1$] WHERE F1 IS NOT NULL"; OleDbConnection connection = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + PrmPathExcelFile + @";Extended Properties=""Excel 8.0;IMEX=1;HDR=NO;TypeGuessRows=0;ImportMixedTypes=Text"""); OleDbCommand cmd = new OleDbCommand(sql, connection); OleDbDataAdapter da = new OleDbDataAdapter(cmd); DataSet ds = new DataSet(); ds.Tables.Add("xlsImport", "Excel"); da.Fill(ds, "xlsImport"); // Remove the first row (header row) DataRow rowDel = ds.Tables["xlsImport"].Rows[0]; ds.Tables["xlsImport"].Rows.Remove(rowDel); ds.Tables["xlsImport"].Columns[0].ColumnName = "LocationID"; ds.Tables["xlsImport"].Columns[1].ColumnName = "PartID"; ds.Tables["xlsImport"].Columns[2].ColumnName = "Qty"; ds.Tables["xlsImport"].Columns[3].ColumnName = "UserNotes"; ds.Tables["xlsImport"].Columns[4].ColumnName = "UserID"; connection.Close(); // now you can use LINQ to search the fields var data = ds.Tables["xlsImport"].AsEnumerable(); var query = data.Where(x => x.Field<string>("LocationID") == "COOKCOUNTY").Select(x => new Contact { LocationID= x.Field<string>("LocationID"), PartID = x.Field<string>("PartID"), Quantity = x.Field<string>("Qty"), Notes = x.Field<string>("UserNotes"), UserID = x.Field<string>("UserID") });
DataSet
3,232,281
55
I'm modifying someone else's code where a query is performed using the following: DataSet ds = new DataSet(); SqlDataAdapter da = new SqlDataAdapter(sqlString, sqlConn); da.Fill(ds); How can I tell if the DataSet is empty (i.e. no results were returned)?
If I understand correctly, this should work for you if (ds.Tables[0].Rows.Count == 0) { // }
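If there is any chance the adapter did not produce a table at all, a slightly more defensive version would be:
if (ds.Tables.Count == 0 || ds.Tables[0].Rows.Count == 0)
{
    // no results
}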
DataSet
2,976,473
55
Howdy, I have a DataRow pulled out of a DataTable from a DataSet. I am accessing a column that is defined in SQL as a float datatype. I am trying to assign that value to a local variable (C# float datatype) but am getting an InvalidCastException: DataRow exercise = _exerciseDataSet.Exercise.FindByExerciseID(65); _AccelLimit = (float)exercise["DefaultAccelLimit"]; Now, playing around with this I did make it work, but it did not make any sense and it didn't feel right. _AccelLimit = (float)(double)exercise["DefaultAccelLimit"]; Can anyone explain what I am missing here?
A SQL float is a double according to the documentation for SqlDbType.
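In other words, the value in the DataRow is a boxed double, which is why the double cast in the question works; a small sketch using the names from the question:
object raw = exercise["DefaultAccelLimit"];
float limit = (float)(double)raw;        // unbox to double first, then narrow to float
float limit2 = Convert.ToSingle(raw);    // equivalent, and arguably clearer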
DataSet
122,523
51
What does the "vs" variable mean in the "mtcars" dataset in R? The helpfile says it means "V/S" but that is not enlightening. Commands: data(mtcars) head(mtcars) ?mtcars
I think it's whether the car has a V engine or a straight engine. I'm basing this on the footnote on the page numbered 396 of http://www.mortality.org/INdb/2008/02/12/8/document.pdf
DataSet
18,617,174
49
I have a dataset with the following structure: Classes ‘tbl_df’ and 'data.frame': 10 obs. of 7 variables: $ GdeName : chr "Aeugst am Albis" "Aeugst am Albis" "Aeugst am Albis" "Aeugst am Albis" ... $ Partei : chr "BDP" "CSP" "CVP" "EDU" ... $ Stand1971: num NA NA 4.91 NA 3.21 ... $ Stand1975: num NA NA 5.389 0.438 4.536 ... $ Stand1979: num NA NA 6.2774 0.0195 3.4355 ... $ Stand1983: num NA NA 4.66 1.41 3.76 ... $ Stand1987: num NA NA 3.48 1.65 5.75 ... I want to provide a function which allows to compute the difference between any value, and I would like to do this using dplyrs mutate function like so: (assume the parameters from and to are passed as arguments) from <- "Stand1971" to <- "Stand1987" data %>% mutate(diff = from - to) Of course, this doesn't work, as dplyr uses non-standard evaluation. And I know there's now an elegant solution to the problem using mutate_, and I've read this vignette, but I still can't get my head around it. What to do? Here's the first few rows of the dataset for a reproducible example structure(list(GdeName = c("Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis", "Aeugst am Albis" ), Partei = c("BDP", "CSP", "CVP", "EDU", "EVP", "FDP", "FGA", "FPS", "GLP", "GPS"), Stand1971 = c(NA, NA, 4.907306434, NA, 3.2109535926, 18.272143463, NA, NA, NA, NA), Stand1975 = c(NA, NA, 5.389079711, 0.4382328556, 4.5363022622, 18.749259742, NA, NA, NA, NA), Stand1979 = c(NA, NA, 6.2773722628, 0.0194647202, 3.4355231144, 25.294403893, NA, NA, NA, 2.7055961071), Stand1983 = c(NA, NA, 4.6609804428, 1.412940467, 3.7563539244, 26.277246489, 0.8529335746, NA, NA, 2.601878177), Stand1987 = c(NA, NA, 3.4767860929, 1.6535933856, 5.7451770193, 22.146844746, NA, 3.7453183521, NA, 13.702211858 )), .Names = c("GdeName", "Partei", "Stand1971", "Stand1975", "Stand1979", "Stand1983", "Stand1987"), class = c("tbl_df", "data.frame" ), row.names = c(NA, -10L))
Using the latest version of dplyr (>=0.7), you can use the rlang !! (bang-bang) operator. library(tidyverse) from <- "Stand1971" to <- "Stand1987" data %>% mutate(diff=(!!as.name(from))-(!!as.name(to))) You just need to convert the strings to names with as.name and then insert them into the expression. Unfortunately I seem to have to use a few more parenthesis than I would like, but the !! operator seems to fall in a weird order-of-operations order. Original answer, dplyr (0.3-<0.7): From that vignette (vignette("nse","dplyr")), use lazyeval's interp() function library(lazyeval) from <- "Stand1971" to <- "Stand1987" data %>% mutate_(diff=interp(~from - to, from=as.name(from), to=as.name(to)))
DataSet
29,678,435
47
I am brand new to LINQ and am trying to query my DataSet with it. So I followed this example to the letter, and it does not work. I know that my DataTable needs the .AsEnumerable on the end, but it is not recognized by the IDE. What am I doing wrong? Am I missing a reference/import that is not shown in the example (wouldn't be the first time a MSDN example was not quite right), and if so, which one? Or is it something else altogether? Sample Code: Imports System Imports System.Linq Imports System.Linq.Expressions Imports System.Collections.Generic Imports System.Data Imports System.Data.SqlClient Imports System.Data.Common Imports System.Globalization //Fill the DataSet. Dim ds As New DataSet() ds.Locale = CultureInfo.InvariantCulture //See the FillDataSet method in the Loading Data Into a DataSet topic. FillDataSet(ds) Dim products As DataTable = ds.Tables("Product") Dim query = From product In products.AsEnumerable() _ Select product Console.WriteLine("Product Names:") For Each p In query Console.WriteLine(p.Field(Of String)("Name")) Next The References in my project are: System System.Data System.Drawing System.Windows.Forms System.Xml
While the class holding the extensions is in the System.Data namespace, it's located in an assembly that isn't added to your project by default. Add a reference to System.Data.DataSetExtensions to your project and it should be ok. Remember that, even after you've added the reference, any class that expects to use the extension methods defined in the class will need to have a using statement for System.Data as well.
DataSet
3,949,302
44
I'm trying to get a string from a DataSet without using GetXml. I'm using WriteXml instead. How do I use it to get a string? Thanks
string result = null; using (TextWriter sw = new StringWriter()) { dataSet.WriteXml(sw); result = sw.ToString(); }
DataSet
963,870
41
I am doing some exercises with datasets like so: List with many dictionaries users = [ {"id": 0, "name": "Ashley"}, {"id": 1, "name": "Ben"}, {"id": 2, "name": "Conrad"}, {"id": 3, "name": "Doug"}, {"id": 4, "name": "Evin"}, {"id": 5, "name": "Florian"}, {"id": 6, "name": "Gerald"} ] Dictionary with few lists users2 = { "id": [0, 1, 2, 3, 4, 5, 6], "name": ["Ashley", "Ben", "Conrad", "Doug","Evin", "Florian", "Gerald"] } Pandas dataframes import pandas as pd pd_users = pd.DataFrame(users) pd_users2 = pd.DataFrame(users2) print pd_users == pd_users2 Questions: Should I structure the datasets like users or like users2? Are there performance differences? Is one more readable than the other? Is there a standard I should follow? I usually convert these to pandas dataframes. When I do that, both versions are identical... right? The output is true for each element so it doesn't matter if I work with panda df's right?
This relates to column oriented databases versus row oriented. Your first example is a row oriented data structure, and the second is column oriented. In the particular case of Python, the first could be made notably more efficient using slots, such that the dictionary of columns doesn't need to be duplicated for every row. Which form works better depends a lot on what you do with the data; for instance, row oriented is natural if you only ever access all of any row. Column oriented meanwhile makes much better use of caches and such when you're searching by a particular field (in Python, this may be reduced by the heavy use of references; types like array can optimize that). Traditional row oriented databases frequently use column oriented sorted indices to speed up lookups, and knowing these techniques you can implement any combination using a key-value store. Pandas does convert both your examples to the same format, but the conversion itself is more expensive for the row oriented structure, simply because every individual dictionary must be read. All of these costs may be marginal. There's a third option not evident in your example: In this case, you only have two columns, one of which is an integer ID in a contiguous range from 0. This can be stored in the order of the entries itself, meaning the entire structure would be found in the list you've called users2['name']; but notably, the entries are incomplete without their position. The list translates into rows using enumerate(). It is common for databases to have this special case also (for instance, sqlite rowid). In general, start with a data structure that keeps your code sensible, and optimize only when you know your use cases and have a measurable performance issue. Tools like Pandas probably means most projects will function just fine without finetuning.
DataSet
30,522,982
40
DataTable dt = ds.Tables[4].AsEnumerable() .Where(x => ((DateTime)x["EndDate"]).Date >= DateTime.Now.Date) .CopyToDataTable(); ds.Tables[4] has rows but it throws the exception "The source contains no DataRows." Any idea how to handle or get rid of this exception?
ds.Tables[4] might have rows, but the result of your LINQ query might not, which is likely where the exception is being thrown. Split your method chaining to use interim parameters so you can be dead certain where the error is occurring. It'll also help you check for existing rows using .Any() before you call CopyToDataTable() and avoid said exception. Something like DataTable dt = null; var rows = ds.Tables[4].AsEnumerable() .Where(x => ((DateTime)x["EndDate"]).Date >= DateTime.Now.Date); if (rows.Any()) dt = rows.CopyToDataTable(); Another option is to use the ImportRow function on a DataTable DataTable dt = ds.Tables[4].Clone(); var rows = ds.Tables[4].AsEnumerable() .Where(x => ((DateTime)x["EndDate"]).Date >= DateTime.Now.Date); foreach (var row in rows) dt.ImportRow(row);
DataSet
28,324,740
39
Here is my dataset: After locking my dataframe by year and grouping by month, I proceed with calculating percentage increase/decrease as a new column; it ends up looking like this: Now for my Plotly plot I use this to display traces and add some hover info: fig.add_trace(go.Scatter(x=group_dfff.Months, y=group_dfff.Amount, name=i, hovertemplate='Price: $%{y:.2f}'+'<br>Week: %{x}')) Now as you can see there is an argument hovertemplate where I can pass my x and y... However, I can't figure out how to include my PERC_CHANGE values in it too. Question: How to include other wanted columns' values inside the hovertemplate? Specifically, How do I include PERC_CHANGE values as I shown desired output below: I solved my specific problem, check pic below (adding 3rd element it is, please see comments), however question remains the same as I do not see how to do this for 4th, 5th and so on elements.
For Plotly Express, you need to use the custom_data argument when you create the figure. For example: fig = px.scatter( data_frame=df, x='ColX', y='ColY', custom_data=['Col1', 'Col2', 'Col3'] ) and then modify it using update_traces and hovertemplate, referencing it as customdata. For example: fig.update_traces( hovertemplate="<br>".join([ "ColX: %{x}", "ColY: %{y}", "Col1: %{customdata[0]}", "Col2: %{customdata[1]}", "Col3: %{customdata[2]}", ]) ) This took a lot of trial and error to figure out, as it isn't well-documented, and the inconsistency between the custom_data and customdata is confusing.
DataSet
59,057,881
38
To whom this may concern, I have searched a considerable amount of time, to work a way out of this error "Deleted row information cannot be accessed through the row" I understand that once a row has been deleted from a datatable that it cannot be accessed in a typical fashion and this is why I am getting this error. The big issue is that I am not sure what to do to get my desired result, which I will outline below. Basically when a row in "dg1" is deleted the row beneath it takes the place of the deleted row (obviously) and thus inherits the deleted rows index. The purpose of this method is to replace and reset the rows index (via grabbing it from the corresponding value in the dataset) that took the deleted rows place and as such the index value. Right now I am just using a label (lblText) to try and get a response from the process, but it crashes when the last nested if statement trys to compare values. Here is the code: void dg1_Click(object sender, EventArgs e) { rowIndex = dg1.CurrentRow.Index; //gets the current rows string value = Convert.ToString(dg1.Rows[rowIndex].Cells[0].Value); if (ds.Tables[0].Rows[rowIndex].RowState.ToString() == "Deleted") { for (int i = 0; i < dg1.Rows.Count; i++) { if (Convert.ToString(ds.Tables[0].Rows[i][0].ToString()) == value) // ^ **where the error is occurring** { lblTest.Text = "Aha!"; //when working, will place index of compared dataset value into rowState, which is displaying the current index of the row I am focussed on in 'dg1' } } } Thanks ahead of time for the help, I really did search, and if it is easy to figure out through a simple google search then allow myself to repeatably hate on me, because I DID try. gc
You can also use the DataSet's AcceptChanges() method to apply the deletes fully. ds.Tables[0].Rows[0].Delete(); ds.AcceptChanges();
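If you need to keep the deleted rows around (for example so a data adapter can still push the deletes to the database), another option is to check RowState before reading, and to go through DataRowVersion.Original for rows that are already deleted; a sketch using the names from the question:
DataRow row = ds.Tables[0].Rows[rowIndex];
if (row.RowState == DataRowState.Deleted)
{
    // the current version is gone, but the original values are still accessible
    string oldValue = row[0, DataRowVersion.Original].ToString();
}
else
{
    string value = row[0].ToString();
}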
DataSet
4,321,840
38
Does anyone know of a Javascript charting library that can handle huge datasets? By 'huge', I mean drawing a line graph with around 1,000 lines and 25,000 data points in total. (With an uneven distribution of points per line. A lot of lines have very few points, but some have up to 4,000.) Here is an example data file. Currently I'm using Highcharts, but it's far too slow at plotting huge datasets. I don't want to use Flash or Silverlight. I was hoping to use Javascript so that my users can zoom+pan around the graph, and turn lines on/off etc. But if this is just too much data for any Javascript charting library to handle, then I'll have to make the graphs server-side.
In their example, the dygraphs library handles six thousand data points in a very fast manner. Perhaps that would be suitable for your needs? It is based on Canvas with excanvas for IE support.
DataSet
5,019,674
36
How can I find out which column and value is violating the constraint? The exception message isn't helpful at all: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
Like many people, I have my own standard data access components, which include methods to return a DataSet. Of course, if a ConstraintException is thrown, the DataSet isn't returned to the caller, so the caller can't check for row errors. What I've done is catch and rethrow ConstraintException in such methods, logging row error details, as in the following example (which uses Log4Net for logging): ... try { adapter.Fill(dataTable); // or dataSet } catch (ConstraintException) { LogErrors(dataTable); throw; } ... private static void LogErrors(DataSet dataSet) { foreach (DataTable dataTable in dataSet.Tables) { LogErrors(dataTable); } } private static void LogErrors(DataTable dataTable) { if (!dataTable.HasErrors) return; StringBuilder sb = new StringBuilder(); sb.AppendFormat( CultureInfo.CurrentCulture, "ConstraintException while filling {0}", dataTable.TableName); DataRow[] errorRows = dataTable.GetErrors(); for (int i = 0; (i < MAX_ERRORS_TO_LOG) && (i < errorRows.Length); i++) { sb.AppendLine(); sb.Append(errorRows[i].RowError); } _logger.Error(sb.ToString()); }
DataSet
140,161
36
For quick testing, debugging, creating portable examples, and benchmarking, R has available to it a large number of data sets (in the Base R datasets package). The command library(help="datasets") at the R prompt describes nearly 100 historical datasets, each of which have associated descriptions and metadata. Is there anything like this for Python?
You can use rpy2 package to access all R datasets from Python. Set up the interface: >>> from rpy2.robjects import r, pandas2ri >>> def data(name): ... return pandas2ri.ri2py(r[name]) Then call data() with any dataset's name of the available datasets (just like in R) >>> df = data('iris') >>> df.describe() Sepal.Length Sepal.Width Petal.Length Petal.Width count 150.000000 150.000000 150.000000 150.000000 mean 5.843333 3.057333 3.758000 1.199333 std 0.828066 0.435866 1.765298 0.762238 min 4.300000 2.000000 1.000000 0.100000 25% 5.100000 2.800000 1.600000 0.300000 50% 5.800000 3.000000 4.350000 1.300000 75% 6.400000 3.300000 5.100000 1.800000 max 7.900000 4.400000 6.900000 2.500000 To see a list of the available datasets with a description for each: >>> print(r.data()) Note: rpy2 requires R installation with setting R_HOME variable, and pandas must be installed as well. UPDATE I just created PyDataset, which is a simple module to make loading a dataset from Python as easy as R's (and it does not require R installation, only pandas). To start using it, install the module: $ pip install pydataset Then just load up any dataset you wish (currently around 757 datasets available): from pydataset import data titanic = data('titanic')
DataSet
16,579,407
34
After making some changes to my SQL database structure (using scripts in SQL Server Management Studio), how can I update my DataSet.xsd file in Visual Studio to match the new structure? Must I do this manually? I considered deleting the DataSet and importing a new one, but I'll lose all my custom Table Adapter queries.
From the MSDN Forums: If you right-click the TableAdapter in the Dataset Designer and click 'Configure' the TableAdapter Configuration Wizard opens where you can reconfigure the main query that defines the schema of your table. This should leave the additional TableAdapter queries (the additional methods) that were added after initial configuration. Of course the additional methods will also need to be reconfigured if they reference a column that has been removed from the table on the database, or if any of the column definitions change (for example, if the data type is changed or modified). So if an aditional column was added to the table and it allows Null values the existing methods should still work - just depends on the scope of change to the schema of the table in the database. An addition: Use View > Other Windows > Data Source (Shift + Alt + D)
DataSet
9,497,689
33
Tensorflow seems to lack a reader for ".npy" files. How can I read my data files into the new tensorflow.data.Dataset pipeline? My data doesn't fit in memory. Each object is saved in a separate ".npy" file. Each file contains 2 different ndarrays as features and a scalar as their label.
It is actually possible to read directly NPY files with TensorFlow instead of TFRecords. The key pieces are tf.data.FixedLengthRecordDataset and tf.io.decode_raw, along with a look at the documentation of the NPY format. For simplicity, let's suppose that a float32 NPY file containing an array with shape (N, K) is given, and you know the number of features K beforehand, as well as the fact that it is a float32 array. An NPY file is just a binary file with a small header and followed by the raw array data (object arrays are different, but we're considering numbers now). In short, you can find the size of this header with a function like this: def npy_header_offset(npy_path): with open(str(npy_path), 'rb') as f: if f.read(6) != b'\x93NUMPY': raise ValueError('Invalid NPY file.') version_major, version_minor = f.read(2) if version_major == 1: header_len_size = 2 elif version_major == 2: header_len_size = 4 else: raise ValueError('Unknown NPY file version {}.{}.'.format(version_major, version_minor)) header_len = sum(b << (8 * i) for i, b in enumerate(f.read(header_len_size))) header = f.read(header_len) if not header.endswith(b'\n'): raise ValueError('Invalid NPY file.') return f.tell() With this you can create a dataset like this: import tensorflow as tf npy_file = 'my_file.npy' num_features = ... dtype = tf.float32 header_offset = npy_header_offset(npy_file) dataset = tf.data.FixedLengthRecordDataset([npy_file], num_features * dtype.size, header_bytes=header_offset) Each element of this dataset contains a long string of bytes representing a single example. You can now decode it to obtain an actual array: dataset = dataset.map(lambda s: tf.io.decode_raw(s, dtype)) The elements will have indeterminate shape, though, because TensorFlow does not keep track of the length of the strings. You can just enforce the shape since you know the number of features: dataset = dataset.map(lambda s: tf.reshape(tf.io.decode_raw(s, dtype), (num_features,))) Similarly, you can choose to perform this step after batching, or combine it in whatever way you feel like. The limitation is that you had to know the number of features in advance. It is possible to extract it from the NumPy header, though, just a bit of a pain, and in any case very hardly from within TensorFlow, so the file names would need to be known in advance. Another limitation is that, as it is, the solution requires you to either use only one file per dataset or files that have the same header size, although if you know that all the arrays have the same size that should actually be the case. Admittedly, if one considers this kind of approach it may just be better to have a pure binary file without headers, and either hard code the number of features or read them from a different source...
DataSet
48,889,482
29
Is there any way to remove a dataset from an hdf5 file, preferably using h5py? Or alternatively, is it possible to overwrite a dataset while keeping the other datasets intact? To my understanding, h5py can read/write hdf5 files in 5 modes: f = h5py.File("filename.hdf5",'mode') where mode can be r for read, r+ for read-write, a for read-write but creates a new file if it doesn't exist, w for write/overwrite, and w- which is the same as w but fails if the file already exists. I have tried all of them but none seem to work. Any suggestions are much appreciated.
Yes, this can be done. with h5py.File(input, "a") as f: del f[datasetname] You will need to have the file open in a writeable mode, for example append (as above) or write. As noted by @seppo-enarvi in the comments the purpose of the previously recommended f.__delitem__(datasetname) function is to implement the del operator, so that one can delete a dataset using del f[datasetname]
DataSet
31,861,724
29
I can check for a DBNull on a data row using either of these methods. Either by using if(dr[0][0]==DBNull.Value) //do something or by doing if(dr[0][0].ToString().IsNullOrEmpty()) //do something In both cases I will get the same result. But which one is the conceptually right approach, and which one will use fewer resources?
The first way is somewhat correct. However, more accepted way is: if ( dr[0][0] is DBNull ) And the second way is definitely incorrect. If you use the second way, you will get true in two cases: Your value is DBNull Your value is an empty string
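Side by side, the usual ways to test for a database null look like this (the row variable and column index are just illustrative):
if (row[0] is DBNull) { /* ... */ }          // type check, as recommended above
if (row[0] == DBNull.Value) { /* ... */ }    // comparison against the DBNull singleton
if (row.IsNull(0)) { /* ... */ }             // DataRow's built-in helper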
DataSet
3,393,958
29
I have a generic list of objects. Each object has 9 string properties. I want to turn that list into a dataset that I can pass to a datagridview. What's the best way to go about doing this?
I apologize for putting an answer up to this question, but I figured it would be the easiest way to view my final code. It includes fixes for nullable types and null values :-) public static DataSet ToDataSet<T>(this IList<T> list) { Type elementType = typeof(T); DataSet ds = new DataSet(); DataTable t = new DataTable(); ds.Tables.Add(t); //add a column to table for each public property on T foreach (var propInfo in elementType.GetProperties()) { Type ColType = Nullable.GetUnderlyingType(propInfo.PropertyType) ?? propInfo.PropertyType; t.Columns.Add(propInfo.Name, ColType); } //go through each property on T and add each value to the table foreach (T item in list) { DataRow row = t.NewRow(); foreach (var propInfo in elementType.GetProperties()) { row[propInfo.Name] = propInfo.GetValue(item, null) ?? DBNull.Value; } t.Rows.Add(row); } return ds; }
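Usage would then be along these lines (myObjects, LoadMyObjects and dataGridView1 are just placeholder names):
List<MyObject> myObjects = LoadMyObjects();
DataSet ds = myObjects.ToDataSet();
dataGridView1.DataSource = ds.Tables[0];   // bind the single generated table to the grid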
DataSet
1,245,662
29
I am using a dataset to insert data being converted from an older database. The requirement is to maintain the current Order_ID numbers. I've tried using: SET IDENTITY_INSERT orders ON; This works when I'm in SqlServer Management Studio, I am able to successfully INSERT INTO orders (order_Id, ...) VALUES ( 1, ...); However, it does not allow me to do it via the dataset insert that I'm using in my conversion script. Which looks basically like this: dsOrders.Insert(oldorderId, ...); I've run the SQL (SET IDENTITY_INSERT orders ON) during the process too. I know that I can only do this against one table at a time and I am. I keep getting this exception: Exception when attempting to insert a value into the orders table System.Data.SqlClient.SqlException: Cannot insert explicit value for identity column in table 'orders' when IDENTITY_INSERT is set to OFF. Any ideas? Update AlexS & AlexKuznetsov have mentioned that Set Identity_Insert is a connection level setting, however, when I look at the SQL in SqlProfiler, I notice several commands. First - SET IDENTITY_INSERT DEAL ON Second - exec sp_reset_connection Third to n - my various sql commands including select & insert's There is always an exec sp_reset_connection between the commands though, I believe that this is responsible for the loss of value on the Identity_Insert setting. Is there a way to stop my dataset from doing the connection reset?
You have the options mixed up: SET IDENTITY_INSERT orders ON will turn ON the ability to insert specific values (that you specify) into a table with an IDENTITY column. SET IDENTITY_INSERT orders OFF Turns that behavior OFF again and the normal behavior (you can't specify values for IDENTITY columns since they are auto-generated) is reinstated. Marc
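As for the connection reset you are seeing: SET IDENTITY_INSERT is a session-level setting, so one way around the pooled sp_reset_connection calls is to send the SET statements and the INSERT in the same batch on the same open connection. A rough sketch (table and column names taken from the question, the rest of the column list left out, connectionString and oldOrderId assumed to be yours):
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = conn.CreateCommand())
{
    cmd.CommandText =
        "SET IDENTITY_INSERT orders ON; " +
        "INSERT INTO orders (order_Id /*, ... */) VALUES (@orderId /*, ... */); " +
        "SET IDENTITY_INSERT orders OFF;";
    cmd.Parameters.AddWithValue("@orderId", oldOrderId);
    conn.Open();
    cmd.ExecuteNonQuery();
}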
DataSet
1,234,780
29
I have a dataset of ~2m observations which I need to split into training, validation and test sets in the ratio 60:20:20. A simplified excerpt of my dataset looks like this: +---------+------------+-----------+-----------+ | note_id | subject_id | category | note | +---------+------------+-----------+-----------+ | 1 | 1 | ECG | blah ... | | 2 | 1 | Discharge | blah ... | | 3 | 1 | Nursing | blah ... | | 4 | 2 | Nursing | blah ... | | 5 | 2 | Nursing | blah ... | | 6 | 3 | ECG | blah ... | +---------+------------+-----------+-----------+ There are multiple categories - which are not evenly balanced - so I need to ensure that the training, validation and test sets all have the same proportions of categories as in the original dataset. This part is fine, I can just use StratifiedShuffleSplit from the sklearn library. However, I also need to ensure that the observations from each subject are not split across the training, validation and test datasets. All the observations from a given subject need to be in the same bucket to ensure my trained model has never seen the subject before when it comes to validation/testing. E.g. every observation of subject_id 1 should be in the training set. I can't think of a way to ensure a stratified split by category, prevent contamination (for want of a better word) of subject_id across datasets, ensure a 60:20:20 split and ensure that the dataset is somehow shuffled. Any help would be appreciated! Thanks! EDIT: I've now learnt that grouping by a category and keeping groups together across dataset splits can also be accomplished by sklearn through the GroupShuffleSplit function. So essentially, what I need is a combined stratified and grouped shuffle split i.e. StratifiedGroupShuffleSplit which does not exist. Github issue: https://github.com/scikit-learn/scikit-learn/issues/12076
This is solved in scikit-learn 1.0 with StratifiedGroupKFold In this example you generate 3 folds after shuffling, keeping groups together and does stratification (as much as possible) import numpy as np from sklearn.model_selection import StratifiedGroupKFold X = np.ones((30, 2)) y = np.array([0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1,]) groups = np.array([1, 1, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 6, 6, 7, 8, 8, 9, 9, 9, 10, 11, 11, 12, 12, 12, 13, 13, 13, 13]) print("ORIGINAL POSITIVE RATIO:", y.mean()) cv = StratifiedGroupKFold(n_splits=3, shuffle=True) for fold, (train_idxs, test_idxs) in enumerate(cv.split(X, y, groups)): print("Fold :", fold) print("TRAIN POSITIVE RATIO:", y[train_idxs].mean()) print("TEST POSITIVE RATIO :", y[test_idxs].mean()) print("TRAIN GROUPS :", set(groups[train_idxs])) print("TEST GROUPS :", set(groups[test_idxs])) In the output you can see that the ratio of positives cases in the folds stays close to the original positive ratio and that the same group is never in both sets. Of course the fewer/bigger groups you have (i.e., the more imbalanced your classes are) the more difficult will be to stay close to the original classes distribution. Output: ORIGINAL POSITIVE RATIO: 0.5 Fold : 0 TRAIN POSITIVE RATIO: 0.4375 TEST POSITIVE RATIO : 0.5714285714285714 TRAIN GROUPS : {1, 3, 4, 5, 6, 7, 10, 11} TEST GROUPS : {2, 8, 9, 12, 13} Fold : 1 TRAIN POSITIVE RATIO: 0.5 TEST POSITIVE RATIO : 0.5 TRAIN GROUPS : {2, 4, 5, 7, 8, 9, 11, 12, 13} TEST GROUPS : {1, 10, 3, 6} Fold : 2 TRAIN POSITIVE RATIO: 0.5454545454545454 TEST POSITIVE RATIO : 0.375 TRAIN GROUPS : {1, 2, 3, 6, 8, 9, 10, 12, 13} TEST GROUPS : {11, 4, 5, 7}
DataSet
56,872,664
28
I need to read the 'wdbc.data' file in the following data folder: http://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/ Doing this in R is easy using the read.csv command, but since the header is missing, how can I add it? I have the information but don't know how to do this, and I'd prefer not to edit the data file.
You can do the following: Load the data: test <- read.csv( "http://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data", header=FALSE) Note that the default value of the header argument for read.csv is TRUE so in order to get all lines you need to set it to FALSE. Add names to the different columns in the data.frame names(test) <- c("A","B","C","D","E","F","G","H","I","J","K") or alternative and faster as I understand (not reloading the entire dataset): colnames(test) <- c("A","B","C","D","E","F","G","H","I","J","K")
DataSet
14,021,675
27
Well. I have a DataTable with multiple columns and multiple rows. I want to loop through the DataTable dynamically; basically the output should look as follows, excluding the braces: Name (DataColumn) Tom (DataRow) Peter (DataRow) Surname (DataColumn) Smith (DataRow) Brown (DataRow) foreach (DataColumn col in rightsTable.Columns) { foreach (DataRow row in rightsTable.Rows) { //output } } I typed that out and noticed this would not work. Can someone please advise on a better way of doing this?
foreach (DataColumn col in rightsTable.Columns) { foreach (DataRow row in rightsTable.Rows) { Console.WriteLine(row[col.ColumnName].ToString()); } }
DataSet
12,198,131
27
How can I create a DataSet that is manually filled, i.e. filled through code or by user input? I want to know the required steps, and whether I need to create a DataTable or a DataRow first; I really don't know the steps to fill the DataSet.
DataSet ds = new DataSet(); DataTable dt = new DataTable("MyTable"); dt.Columns.Add(new DataColumn("id",typeof(int))); dt.Columns.Add(new DataColumn("name", typeof(string))); DataRow dr = dt.NewRow(); dr["id"] = 123; dr["name"] = "John"; dt.Rows.Add(dr); ds.Tables.Add(dt);
DataSet
3,125,864
27
I have a DataSet full of costumers. I was wondering if there is any way to filter the dataset and only get the information I want. For example, to get CostumerName and CostumerAddress for a costumer that has CostumerID = 1 Is it possible?
You can use DataTable.Select: var strExpr = "CostumerID = 1 AND OrderCount > 2"; var strSort = "OrderCount DESC"; // Use the Select method to find all rows matching the filter. var foundRows = ds.Tables[0].Select(strExpr, strSort); Or you can use DataView: ds.Tables[0].DefaultView.RowFilter = strExpr; UPDATE I'm not sure why you want to have a DataSet returned. But I'd go with the following solution: var dv = ds.Tables[0].DefaultView; dv.RowFilter = strExpr; var newDS = new DataSet(); var newDT = dv.ToTable(); newDS.Tables.Add(newDT);
DataSet
6,007,872
26
I'm trying to add to a new DataSet X a DataTable that is inside of a different DataSet Y. If I add it directly, I get the following error: DataTable already belongs to another DataSet. Do I have to clone the DataTable and import all the rows to it and then add the new DataTable to the new DataSet? Is there a better/easy way to do it?
There are two easy ways to do this: DataTable.Copy Instead of DataTable.Clone, use DataTable.Copy to create a copy of your data table; then insert the copy into the target DataSet: dataSetX.Tables.Add( dataTableFromDataSetY.Copy() ); DataSet.Merge You could also use DataSet.Merge for this: dataSetX.Merge(dataTableFromDataSetY); Note, however, that if you are going to use this method, you might want to make sure that your target DataSet doesn't already contain a table with the same name: If the target DataSet doesn't contain a table by the same name, a fresh copy of the table is created inside the data set; If a table by the same name is already in the target data set, then it will get merged with the one passed to Merge, and you end up with a mix of the two.
DataSet
4,897,080
26
I am trying to output values of each rows from a DataSet: for ($i=0;$i -le $ds.Tables[1].Rows.Count;$i++) { Write-Host 'value is : ' + $i + ' ' + $ds.Tables[1].Rows[$i][0] } gives the output ... value is : +0+ +System.Data.DataSet.Tables[1].Rows[0][0] value is : +1+ +System.Data.DataSet.Tables[1].Rows[1][0] value is : +2+ +System.Data.DataSet.Tables[1].Rows[2][0] value is : +3+ +System.Data.DataSet.Tables[1].Rows[3][0] value is : +4+ +System.Data.DataSet.Tables[1].Rows[4][0] value is : +5+ +System.Data.DataSet.Tables[1].Rows[5][0] value is : +6+ +System.Data.DataSet.Tables[1].Rows[6][0] How do I get the actual value from the column?
The PowerShell string evaluation is calling ToString() on the DataSet. In order to evaluate any properties (or method calls), you have to force evaluation by enclosing the expression in $() for($i=0;$i -lt $ds.Tables[1].Rows.Count;$i++) { write-host "value is : $i $($ds.Tables[1].Rows[$i][0])" } Additionally foreach allows you to iterate through a collection or array without needing to figure out the length. Rewritten (and edited for compile) - foreach ($Row in $ds.Tables[1].Rows) { write-host "value is : $($Row[0])" }
DataSet
804,133
26
I have this: If String.IsNullOrEmpty(editTransactionRow.pay_id.ToString()) = False Then stTransactionPaymentID = editTransactionRow.pay_id 'Check for null value End If Now, when editTransactionRow.pay_id is Null Visual Basic throws an exception. Is there something wrong with this code?
The equivalent of null in VB is Nothing so your check wants to be: If editTransactionRow.pay_id IsNot Nothing Then stTransactionPaymentID = editTransactionRow.pay_id End If Or possibly, if you are actually wanting to check for a SQL null value: If editTransactionRow.pay_id <> DBNull.Value Then ... End If Note that if editTransactionRow is a row from a typed DataSet, reading pay_id while it holds a database null is itself what throws, so the generated editTransactionRow.Ispay_idNull() method is the safer check because it never touches the value.
DataSet
378,225
26
According to this page, it's possible to use TClientDataset as an in-memory dataset, completely independent of any actual databases or files. It describes how to setup the dataset's table structure and how to load data into it at runtime. But when I tried to follow its instructions in D2009, step 4 (table.Open) raised an exception. It said that it didn't have a provider specified. The entire point of the example on that page is to build a dataset that doesn't need a provider. Is the page wrong, is it outdated, or am I missing a step somewhere? And if the page is wrong, what do I need to use instead to create a completely independent in-memory dataset? I've been using TJvMemoryData, but if possible I'd like to reduce the amount of extra dependencies that my dataset adds into my project.
At runtime you can use table.CreateDataset or if this is on a design surface you can right click on the CDS and click create dataset. You need to have specified columns/types for the CDS before you can do this though.
DataSet
274,958
26
Is any place I can download Treebank of English phrases for free or less than $100? I need training data containing bunch of syntactic parsed sentences (>1000) in English in any format. Basically all I need is just words in this sentences being recognized by part of speech.
Here are a couple (English) treebanks available for free: American National Corpus: MASC Questions: QuestionBank and Stanford's corrections British news: BNC TED talks: NAIST-NTT TED Treebank Georgetown University Multilayer Corpus: GUM Biomedical: NaCTeM GENIA treebank Brown GENIA treebank CRAFT corpus See also Wikipedia for a huge list.
DataSet
8,949,517
25
What I want to do I'm trying to use the Microsoft.Office.Interop.Excel namespace to open an Excel file (XSL or CSV, but sadly not XSLX) and import it into a DataSet. I don't have control over the worksheet or column names, so I need to allow for changes to them. What I've tried I've tried the OLEDB method of this in the past, and had a lot of problems with it (buggy, slow, and required prior knowledge of the Excel file's schema), so I want to avoid doing that again. What I'd like to do is use Microsoft.Office.Interop.Excel to import the workbook directly to a DataSet, or loop through the worksheets and load each one into a DataTable. Believe it or not, I've had trouble finding resources for this. A few searches on StackOverflow have found mostly people trying to do the reverse (DataSet => Excel), or the OLEDB technique. Google hasn't been much more helpful. What I've got so far public void Load(string filename, Excel.XlFileFormat format = Excel.XlFileFormat.xlCSV) { app = new Excel.Application(); book = app.Workbooks.Open(Filename: filename, Format: format); DataSet ds = new DataSet(); foreach (Excel.Worksheet sheet in book.Sheets) { DataTable dt = new DataTable(sheet.Name); ds.Tables.Add(dt); //??? Fill dt from sheet } this.Data = ds; } I'm fine with either importing the entire book at once, or looping through one sheet at a time. Can I do this with Interop.Excel?
What about using Excel Data Reader (previously hosted here) an open source project on codeplex? Its works really well for me to export data from excel sheets. The sample code given on the link specified: FileStream stream = File.Open(filePath, FileMode.Open, FileAccess.Read); //1. Reading from a binary Excel file ('97-2003 format; *.xls) IExcelDataReader excelReader = ExcelReaderFactory.CreateBinaryReader(stream); //... //2. Reading from a OpenXml Excel file (2007 format; *.xlsx) IExcelDataReader excelReader = ExcelReaderFactory.CreateOpenXmlReader(stream); //... //3. DataSet - The result of each spreadsheet will be created in the result.Tables DataSet result = excelReader.AsDataSet(); //... //4. DataSet - Create column names from first row excelReader.IsFirstRowAsColumnNames = true; DataSet result = excelReader.AsDataSet(); //5. Data Reader methods while (excelReader.Read()) { //excelReader.GetInt32(0); } //6. Free resources (IExcelDataReader is IDisposable) excelReader.Close(); UPDATE After some search around, I came across this article: Faster MS Excel Reading using Office Interop Assemblies. The article only uses Office Interop Assemblies to read data from a given Excel Sheet. The source code is of the project is there too. I guess this article can be a starting point on what you trying to achieve. See if that helps UPDATE 2 The code below takes an excel workbook and reads all values found, for each excel worksheet inside the excel workbook. private static void TestExcel() { ApplicationClass app = new ApplicationClass(); Workbook book = null; Range range = null; try { app.Visible = false; app.ScreenUpdating = false; app.DisplayAlerts = false; string execPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().CodeBase); book = app.Workbooks.Open(@"C:\data.xls", Missing.Value, Missing.Value, Missing.Value , Missing.Value, Missing.Value, Missing.Value, Missing.Value , Missing.Value, Missing.Value, Missing.Value, Missing.Value , Missing.Value, Missing.Value, Missing.Value); foreach (Worksheet sheet in book.Worksheets) { Console.WriteLine(@"Values for Sheet "+sheet.Index); // get a range to work with range = sheet.get_Range("A1", Missing.Value); // get the end of values to the right (will stop at the first empty cell) range = range.get_End(XlDirection.xlToRight); // get the end of values toward the bottom, looking in the last column (will stop at first empty cell) range = range.get_End(XlDirection.xlDown); // get the address of the bottom, right cell string downAddress = range.get_Address( false, false, XlReferenceStyle.xlA1, Type.Missing, Type.Missing); // Get the range, then values from a1 range = sheet.get_Range("A1", downAddress); object[,] values = (object[,]) range.Value2; // View the values Console.Write("\t"); Console.WriteLine(); for (int i = 1; i <= values.GetLength(0); i++) { for (int j = 1; j <= values.GetLength(1); j++) { Console.Write("{0}\t", values[i, j]); } Console.WriteLine(); } } } catch (Exception e) { Console.WriteLine(e); } finally { range = null; if (book != null) book.Close(false, Missing.Value, Missing.Value); book = null; if (app != null) app.Quit(); app = null; } } In the above code, values[i, j] is the value that you need to be added to the dataset. i denotes the row, whereas, j denotes the column.
DataSet
7,244,971
25
I'm attempting to use the DataSet designer to create a datatable from a query. I got this down just fine. The query used returns a nullable datetime column from the database. But, when it gets around to this code: DataSet1.DataTable1DataTable table = adapter.GetData(); This throws a StrongTypingException from: [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] public System.DateTime event_start_date { get { try { return ((global::System.DateTime)(this[this.tableDataTable1.event_start_dateColumn])); } catch (global::System.InvalidCastException e) { throw new global::System.Data.StrongTypingException("The value for column \'event_start_date\' in table \'DataTable1\' is DBNull.", e); } } set { this[this.tableDataTable1.event_start_dateColumn] = value; } } How do I use the designer to allow this column to be Nullable?
Typed data sets don't support nullable types. They support nullable columns. The typed data set generator creates non-nullable properties and related methods for handling null values. If you create a MyDate column of type DateTime and AllowDbNull set to true, the DataRow subclass will implement a non-nullable DateTime property named MyDate, a SetMyDateNull() method, and an IsMyDateNull() method. This means that if you want to use a nullable type in your code, you have to do this: DateTime? myDateTime = myRow.IsMyDateNull() ? null : (DateTime?) myRow.MyDate; While this doesn't totally defeat the purpose of using typed data sets, it really sucks. It's frustrating that typed data sets implement nullable columns in a way that's less usable than the System.Data extension methods, for instance. It's particularly bad because typed data sets do use nullable types in some places - for instance, the Add<TableName>Row() method for the table containing the nullable DateTime column described above will take a DateTime? parameter. Long ago, I asked about this issue on the MSDN forums, and ultimately the ADO project manager explained that nullable types were implemented at the same time as typed data sets, and his team didn't have time to fully integrate the two by .NET 2.0's ship date. And so far as I can tell, they haven't added new features to typed data sets since then.
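Going the other direction - writing a nullable value back into the row - looks like this (a rough sketch; the MyDate column and myRow variable mirror the hypothetical example above, not a real generated class):

// Assign through the property when there is a value,
// otherwise use the generated Set...Null() method instead of the property.
if (myDateTime.HasValue)
    myRow.MyDate = myDateTime.Value;
else
    myRow.SetMyDateNull();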
DataSet
1,638,746
25
I need a solution to export a dataset to an excel file without any asp code (HttpResonpsne...) but i did not find a good example to do this... Best thanks in advance
I've created a class that exports a DataGridView or DataTable to an Excel file. You can probably change it a bit to make it use your DataSet instead (iterating through the DataTables in it). It also does some basic formatting which you could also extend. To use it, simply call ExcelExport, and specify a filename and whether to open the file automatically or not after exporting. I also could have made them extension methods, but I didn't. Feel free to. Note that Excel files can be saved as a glorified XML document and this makes use of that. EDIT: This used to use a vanilla StreamWriter, but as pointed out, things would not be escaped correctly in many cases. Now it uses a XmlWriter, which will do the escaping for you. The ExcelWriter class wraps an XmlWriter. I haven't bothered, but you might want to do a bit more error checking to make sure you can't write cell data before starting a row, and such. The code is below. public class ExcelWriter : IDisposable { private XmlWriter _writer; public enum CellStyle { General, Number, Currency, DateTime, ShortDate }; public void WriteStartDocument() { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteProcessingInstruction("mso-application", "progid=\"Excel.Sheet\""); _writer.WriteStartElement("ss", "Workbook", "urn:schemas-microsoft-com:office:spreadsheet"); WriteExcelStyles(); } public void WriteEndDocument() { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteEndElement(); } private void WriteExcelStyleElement(CellStyle style) { _writer.WriteStartElement("Style", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteAttributeString("ID", "urn:schemas-microsoft-com:office:spreadsheet", style.ToString()); _writer.WriteEndElement(); } private void WriteExcelStyleElement(CellStyle style, string NumberFormat) { _writer.WriteStartElement("Style", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteAttributeString("ID", "urn:schemas-microsoft-com:office:spreadsheet", style.ToString()); _writer.WriteStartElement("NumberFormat", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteAttributeString("Format", "urn:schemas-microsoft-com:office:spreadsheet", NumberFormat); _writer.WriteEndElement(); _writer.WriteEndElement(); } private void WriteExcelStyles() { _writer.WriteStartElement("Styles", "urn:schemas-microsoft-com:office:spreadsheet"); WriteExcelStyleElement(CellStyle.General); WriteExcelStyleElement(CellStyle.Number, "General Number"); WriteExcelStyleElement(CellStyle.DateTime, "General Date"); WriteExcelStyleElement(CellStyle.Currency, "Currency"); WriteExcelStyleElement(CellStyle.ShortDate, "Short Date"); _writer.WriteEndElement(); } public void WriteStartWorksheet(string name) { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteStartElement("Worksheet", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteAttributeString("Name", "urn:schemas-microsoft-com:office:spreadsheet", name); _writer.WriteStartElement("Table", "urn:schemas-microsoft-com:office:spreadsheet"); } public void WriteEndWorksheet() { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteEndElement(); _writer.WriteEndElement(); } public ExcelWriter(string outputFileName) { XmlWriterSettings settings = new XmlWriterSettings(); settings.Indent = true; _writer = XmlWriter.Create(outputFileName, settings); } public void Close() { if (_writer == null) 
throw new InvalidOperationException("Already closed."); _writer.Close(); _writer = null; } public void WriteExcelColumnDefinition(int columnWidth) { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteStartElement("Column", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteStartAttribute("Width", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteValue(columnWidth); _writer.WriteEndAttribute(); _writer.WriteEndElement(); } public void WriteExcelUnstyledCell(string value) { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteStartElement("Cell", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteStartElement("Data", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteAttributeString("Type", "urn:schemas-microsoft-com:office:spreadsheet", "String"); _writer.WriteValue(value); _writer.WriteEndElement(); _writer.WriteEndElement(); } public void WriteStartRow() { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteStartElement("Row", "urn:schemas-microsoft-com:office:spreadsheet"); } public void WriteEndRow() { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteEndElement(); } public void WriteExcelStyledCell(object value, CellStyle style) { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); _writer.WriteStartElement("Cell", "urn:schemas-microsoft-com:office:spreadsheet"); _writer.WriteAttributeString("StyleID", "urn:schemas-microsoft-com:office:spreadsheet", style.ToString()); _writer.WriteStartElement("Data", "urn:schemas-microsoft-com:office:spreadsheet"); switch (style) { case CellStyle.General: _writer.WriteAttributeString("Type", "urn:schemas-microsoft-com:office:spreadsheet", "String"); break; case CellStyle.Number: case CellStyle.Currency: _writer.WriteAttributeString("Type", "urn:schemas-microsoft-com:office:spreadsheet", "Number"); break; case CellStyle.ShortDate: case CellStyle.DateTime: _writer.WriteAttributeString("Type", "urn:schemas-microsoft-com:office:spreadsheet", "DateTime"); break; } _writer.WriteValue(value); // tag += String.Format("{1}\"><ss:Data ss:Type=\"DateTime\">{0:yyyy\\-MM\\-dd\\THH\\:mm\\:ss\\.fff}</ss:Data>", value, _writer.WriteEndElement(); _writer.WriteEndElement(); } public void WriteExcelAutoStyledCell(object value) { if (_writer == null) throw new InvalidOperationException("Cannot write after closing."); //write the <ss:Cell> and <ss:Data> tags for something if (value is Int16 || value is Int32 || value is Int64 || value is SByte || value is UInt16 || value is UInt32 || value is UInt64 || value is Byte) { WriteExcelStyledCell(value, CellStyle.Number); } else if (value is Single || value is Double || value is Decimal) //we'll assume it's a currency { WriteExcelStyledCell(value, CellStyle.Currency); } else if (value is DateTime) { //check if there's no time information and use the appropriate style WriteExcelStyledCell(value, ((DateTime)value).TimeOfDay.CompareTo(new TimeSpan(0, 0, 0, 0, 0)) == 0 ? 
CellStyle.ShortDate : CellStyle.DateTime); } else { WriteExcelStyledCell(value, CellStyle.General); } } #region IDisposable Members public void Dispose() { if (_writer == null) return; _writer.Close(); _writer = null; } #endregion } Then you can export your DataTable using the following: public static void ExcelExport(DataTable data, String fileName, bool openAfter) { //export a DataTable to Excel DialogResult retry = DialogResult.Retry; while (retry == DialogResult.Retry) { try { using (ExcelWriter writer = new ExcelWriter(fileName)) { writer.WriteStartDocument(); // Write the worksheet contents writer.WriteStartWorksheet("Sheet1"); //Write header row writer.WriteStartRow(); foreach (DataColumn col in data.Columns) writer.WriteExcelUnstyledCell(col.Caption); writer.WriteEndRow(); //write data foreach (DataRow row in data.Rows) { writer.WriteStartRow(); foreach (object o in row.ItemArray) { writer.WriteExcelAutoStyledCell(o); } writer.WriteEndRow(); } // Close up the document writer.WriteEndWorksheet(); writer.WriteEndDocument(); writer.Close(); if (openAfter) OpenFile(fileName); retry = DialogResult.Cancel; } } catch (Exception myException) { retry = MessageBox.Show(myException.Message, "Excel Export", MessageBoxButtons.RetryCancel, MessageBoxIcon.Asterisk); } } }
DataSet
373,925
25
Say I am loading MNIST from torchvision.datasets.MNIST, but I only want to load in 10000 images total, how would I slice the data to limit it to only some number of data points? I understand that the DataLoader is a generator yielding data in the size of the specified batch size, but how do you slice datasets? tr = datasets.MNIST('../data', train=True, download=True, transform=transform) te = datasets.MNIST('../data', train=False, transform=transform) train_loader = DataLoader(tr, batch_size=args.batch_size, shuffle=True, num_workers=4, **kwargs) test_loader = DataLoader(te, batch_size=args.batch_size, shuffle=True, num_workers=4, **kwargs)
You can use torch.utils.data.Subset() e.g. for the first 10,000 elements: import torch import torch.utils.data as data_utils indices = torch.arange(10000) tr_10k = data_utils.Subset(tr, indices)
DataSet
44,856,691
24
I have 10000 BMP images of some handwritten digits. If i want to feed the datas to a neural network what do i need to do ? For MNIST dataset i just had to write (X_train, y_train), (X_test, y_test) = mnist.load_data() I am using Keras library in python . How can i create such dataset ?
You can either write a function that loads all your images and stack them into a numpy array if all fits in RAM or use Keras ImageDataGenerator (https://keras.io/preprocessing/image/) which includes a function flow_from_directory. You can find an example here https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d.
DataSet
39,289,285
24
In C#, I'm trying to loop through my dataset to show data from each row from a specific column. I want the get each date under the column name "TaskStart" and display it on a report, but its just shows the date from the first row for all rows can anybody help? foreach (DataTable table in ds.Tables) { foreach (DataRow dr in table.Rows) { DateTime TaskStart = DateTime.Parse( ds.Tables[0].Rows[0]["TaskStart"].ToString()); TaskStart.ToString("dd-MMMM-yyyy"); rpt.SetParameterValue("TaskStartDate", TaskStart); } }
I believe you intended it more this way: foreach (DataTable table in ds.Tables) { foreach (DataRow dr in table.Rows) { DateTime TaskStart = DateTime.Parse(dr["TaskStart"].ToString()); TaskStart.ToString("dd-MMMM-yyyy"); rpt.SetParameterValue("TaskStartDate", TaskStart); } } You always accessed your first row in your dataset.
DataSet
15,252,223
24
I want to return virtual table from stored procedure and I want to use it in dataset in c# .net. My procedure is a little complex and can't find how to return a table and set it in a dataset Here is my procedure to modify: ALTER PROCEDURE [dbo].[Procedure1] @Start datetime, @Finish datetime, @TimeRange time AS BEGIN SET NOCOUNT ON; declare @TimeRanges as TABLE (SessionStart datetime, SessionEnd datetime); with TimeRanges as ( select @Start as StartTime, @Start + @TimeRange as EndTime union all select StartTime + @TimeRange, EndTime + @TimeRange from TimeRanges where StartTime < @Finish ) select StartTime, EndTime, Count( Test.ScenarioID ) as TotalPeaks from TimeRanges as TR left outer join dbo.Test as Test on TR.StartTime <= Test.SessionStartTime and Test.SessionCloseTime < TR.EndTime group by TR.StartTime, TR.EndTime END
Try this DataSet ds = new DataSet("TimeRanges"); using(SqlConnection conn = new SqlConnection("ConnectionString")) { SqlCommand sqlComm = new SqlCommand("Procedure1", conn); sqlComm.Parameters.AddWithValue("@Start", StartTime); sqlComm.Parameters.AddWithValue("@Finish", FinishTime); sqlComm.Parameters.AddWithValue("@TimeRange", TimeRange); sqlComm.CommandType = CommandType.StoredProcedure; SqlDataAdapter da = new SqlDataAdapter(); da.SelectCommand = sqlComm; da.Fill(ds); }
DataSet
12,973,773
24
I have an RDLC file in which I want to make an expression. Here is the image of properties of expression. I need to concatenate First Name, Last name and Middle Init.
The following examples works for me: =Fields!FirstName.Value & " " & Fields!LastName.Value or ="$ " & Sum(Round((Fields!QTD_ORDER.Value - Fields!QTD_RETURN.Value) * Fields!PRICE.Value,2), "Entity_orderItens") Have a look at MSDN
DataSet
5,552,292
24
I just realized that DBUnit doesn't create tables by itself (see How do I test with DBUnit with plain JDBC and HSQLDB without facing a NoSuchTableException?). Is there any way for DBUnit to automatically create tables from a dataset or dtd? EDIT: For simple testing of an in-memory database like HSQLDB, a crude approach can be used to automatically create tables: private void createHsqldbTables(IDataSet dataSet, Connection connection) throws DataSetException, SQLException { String[] tableNames = dataSet.getTableNames(); String sql = ""; for (String tableName : tableNames) { ITable table = dataSet.getTable(tableName); ITableMetaData metadata = table.getTableMetaData(); Column[] columns = metadata.getColumns(); sql += "create table " + tableName + "( "; boolean first = true; for (Column column : columns) { if (!first) { sql += ", "; } String columnName = column.getColumnName(); String type = resolveType((String) table.getValue(0, columnName)); sql += columnName + " " + type; if (first) { sql += " primary key"; first = false; } } sql += "); "; } PreparedStatement pp = connection.prepareStatement(sql); pp.executeUpdate(); } private String resolveType(String str) { try { if (new Double(str).toString().equals(str)) { return "double"; } if (new Integer(str).toString().equals(str)) { return "int"; } } catch (Exception e) {} return "varchar"; }
Not really. As the answer you linked points out, the dbunit xml files contain data, but not column types. And you really don't want to do this; you risk polluting your database with test artifacts, opening up the possibility that production code will accidentally rely on tables created by the test process. Needing to do this strongly suggests you don't have your db creation and maintenance process adequately defined and scripted.
DataSet
1,531,324
24
I'm trying to insert a column into an existing DataSet using C#. As an example I have a DataSet defined as follows: DataSet ds = new DataSet(); ds.Tables.Add(new DataTable()); ds.Tables[0].Columns.Add("column_1", typeof(string)); ds.Tables[0].Columns.Add("column_2", typeof(int)); ds.Tables[0].Columns.Add("column_4", typeof(string)); later on in my code I am wanting to insert a column between column 2 and column 4. DataSets have methods for adding a column but I can't seem to find the best way in insert one. I'd like to write something like the following... ...Columns.InsertAfter("column_2", "column_3", typeof(string)) The end result should be a data set that has a table with the following columns: column_1 column_2 column_3 column_4 rather than: column_1 column_2 column_4 column_3 which is what the add method gives me surely there must be a way of doing something like this. Edit...Just wanting to clarify what I'm doing with the DataSet based on some of the comments below: I am getting a data set from a stored procedure. I am then having to add additional columns to the data set which is then converted into an Excel document. I do not have control over the data returned by the stored proc so I have to add columns after the fact.
You can use the DataColumn.SetOrdinal() method for this purpose. DataSet ds = new DataSet(); ds.Tables.Add(new DataTable()); ds.Tables[0].Columns.Add("column_1", typeof(string)); ds.Tables[0].Columns.Add("column_2", typeof(int)); ds.Tables[0].Columns.Add("column_4", typeof(string)); ds.Tables[0].Columns.Add("column_3", typeof(string)); //set column 3 to be before column 4 ds.Tables[0].Columns[3].SetOrdinal(2);
DataSet
351,557
24
I've got a DataSet in VisualStudio 2005. I need to change the datatype of a column in one of the datatables from System.Int32 to System.Decimal. When I try to change the datatype in the DataSet Designer I receive the following error: Property value is not valid. Cannot change DataType of a column once it has data. From my understanding, this should be changing the datatype in the schema for the DataSet. I don't see how there can be any data to cause this error. Does any one have any ideas?
I get the same error but only for columns with its DefaultValue set to any value (except the default <DBNull>). So the way I got around this issue was: Column DefaultValue : Type in <DBNull> Save and reopen the dataset
DataSet
47,217
24
For example, I have the following data frame: > dataFrame <- read.csv(file="data.csv") > dataFrame Ozone Solar.R Wind Temp Month Day 1 41 190 7.4 67 5 1 2 36 118 8.0 72 5 2 3 12 149 12.6 74 5 3 4 18 313 11.5 62 5 4 5 NA NA 14.3 56 5 5 6 28 NA 14.9 66 5 6 7 23 299 8.6 65 5 7 8 19 99 13.8 59 5 8 9 8 19 20.1 61 5 9 10 NA 194 8.6 69 5 10 How can I get the nth row ? For example, the 10th 10 NA 194 8.6 69 5 10
You just need to use the square brackets to index your dataframe. A dataframe has two dimensions (rows and columns), so the square brackets will need to contain two pieces of information: row 10, and all columns. You indicate all columns by not putting anything. So your code would be this: dataFrame[10,]
DataSet
34,696,951
23
I have a fraud detection algorithm, and I want to check to see if it works against a real world data set. My algorithm says that a claim is usual or not. Are there any data sets available?
Below are some datasets I found that might be related. Credit fraud German credit fraud dataset: in weka's arff format Email fraud Enron dataset Credit Approval German credit dataset @ UCI Australian credit approval Intrusion Detection Intrusion Detection kddcup99 dataset
DataSet
14,151,327
23
I'm trying to fill DataSet which contains 2 tables with one to many relationship. I'm using DataReader to achieve this : public DataSet SelectOne(int id) { DataSet result = new DataSet(); using (DbCommand command = Connection.CreateCommand()) { command.CommandText = "select * from table1"; var param = ParametersBuilder.CreateByKey(command, "ID", id, null); command.Parameters.Add(param); Connection.Open(); using (DbDataReader reader = command.ExecuteReader()) { result.MainTable.Load(reader); } Connection.Close(); } return result; } But I've got only one table filled up. How do I achieve my goal - fill both tables? I would like to use DataReader instead DataAdapter, if it possible.
Filling a DataSet with multiple tables can be done by sending multiple requests to the database, or in a faster way: Multiple SELECT statements can be sent to the database server in a single request. The problem here is that the tables generated from the queries have automatic names Table and Table1. However, the generated table names can be mapped to names that should be used in the DataSet. SqlDataAdapter adapter = new SqlDataAdapter( "SELECT * FROM Customers; SELECT * FROM Orders", connection); adapter.TableMappings.Add("Table", "Customer"); adapter.TableMappings.Add("Table1", "Order"); adapter.Fill(ds);
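If you'd rather stay with the DbDataReader from the original code instead of switching to an adapter, DataTable.Load can do the same job: it consumes the current result set and leaves the reader positioned on the next one. A rough sketch along those lines (same two SELECT statements, Connection assumed to be open):

DataSet result = new DataSet();
result.Tables.Add(new DataTable("Customer"));
result.Tables.Add(new DataTable("Order"));

using (DbCommand command = Connection.CreateCommand())
{
    command.CommandText = "SELECT * FROM Customers; SELECT * FROM Orders";
    using (DbDataReader reader = command.ExecuteReader())
    {
        result.Tables["Customer"].Load(reader);  // first result set
        result.Tables["Order"].Load(reader);     // reader has advanced to the second one
    }
}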
DataSet
11,345,761
23
What is the difference between a dataset and a database ? If they are different then how ? Why is huge data difficult to be manageusing databases today?! Please answer independent of any programming language.
In American English, database usually means "an organized collection of data". A database is usually under the control of a database management system, which is software that, among other things, manages multi-user access to the database. (Usually, but not necessarily. Some simple databases are just text files processed with interpreted languages like awk and Python.) In the SQL world, which is what I'm most familiar with, a database includes things like tables, views, stored procedures, triggers, permissions, and data. Again, in American English, dataset usually refers to data selected and arranged in rows and columns for processing by statistical software. The data might have come from a database, but it might not.
DataSet
7,782,594
23
I want to create a dataset that has the same format as the cifar-10 data set to use with Tensorflow. It should have images and labels. I'd like to be able to take the cifar-10 code but different images and labels, and run that code.
First we need to understand the format in which the CIFAR10 data set is in. If we refer to: https://www.cs.toronto.edu/~kriz/cifar.html, and specifically, the Binary Version section, we see: the first byte is the label of the first image, which is a number in the range 0-9. The next 3072 bytes are the values of the pixels of the image. The first 1024 bytes are the red channel values, the next 1024 the green, and the final 1024 the blue. The values are stored in row-major order, so the first 32 bytes are the red channel values of the first row of the image. Intuitively, we need to store the data in this format. What you can do next as sort of a baseline experiment first, is to get images that are exactly the same size and same number of classes as CIFAR10 and put them in this format. This means that your images should have a size of 32x32x3 and have 10 classes. If you can successfully run this, then you can go further on to factor cases like single channels, different size inputs, and different classes. Doing so would mean that you have to change many variables in the other parts of the code. You have to slowly work your way through. I'm in the midst of working out a general module. My code for this is in https://github.com/jkschin/svhn. If you refer to the svhn_flags.py code, you will see many flags there that can be changed to accommodate your needs. I admit it's cryptic now, as I haven't cleaned it up such that it is readable, but it works. If you are willing to spend some time taking a rough look, you will figure something out. This is probably the easy way to run your own data set on CIFAR10. You could of course just copy the neural network definition and implement your own reader, input format, batching, etc, but if you want it up and running fast, just tune your inputs to fit CIFAR10. EDIT: Some really really basic code that I hope would help. from PIL import Image import numpy as np im = Image.open('images.jpeg') im = (np.array(im)) r = im[:,:,0].flatten() g = im[:,:,1].flatten() b = im[:,:,2].flatten() label = [1] out = np.array(list(label) + list(r) + list(g) + list(b),np.uint8) out.tofile("out.bin") This would convert an image into a byte file that is ready for use in CIFAR10. For multiple images, just keep concatenating the arrays, as stated in the format above. To check if your format is correct, specifically for the Asker's use case, you should get a file size of 427*427*3 + 1 = 546988 bytes. Assuming your pictures are RGB and values range from 0-255. Once you verify that, you're all set to run in TensorFlow. Do use TensorBoard to perhaps visualize one image, just to guarantee correctness. EDIT 2: As per Asker's question in comments, if not eval_data: filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)] If you really want it to work as it is, you need to study the function calls of CIFAR10 code. In cifar10_input, the batches are hardcoded. So you have to edit this line of code to fit the name of the bin file. Or, just distribute your images into 6 bin files evenly.
DataSet
35,032,675
22
We're often working on a project where we've been handed a large data set (say, a handful of files that are 1GB each), and are writing code to analyze it. All of the analysis code is in Git, so everybody can check changes in and out of our central repository. But what to do with the data sets that the code is working with? I want the data in the repository: When users first clone the repository, the data should come with. The data isn't 100% read-only; now and then a data point is corrected, or a minor formatting change happens. If minor changes happen to the data, users should be notified at the next checkout. However, I don't want the data in the git repository: git cloning a spare copy (so I have two versions in my home directory) will pull a few GB of data I already have. I'd rather either have it in a fixed location [set a rule that data must be in ~/data] or add links as needed. With data in the repository, copying to a thumb drive may be impossible, which is annoying when I'm just working on a hundred lines of code. If an erroneous data point is fixed, I'm never going to look at the erroneous version again. Changes to the data set can be tracked in a plain text file or by the person who provided the data (or just not at all). It seems that I need a setup with a main repository for code and an auxiliary repository for data. Any suggestions or tricks for gracefully implementing this, either within git or in POSIX at large? Everything I've thought of is in one way or another a kludge.
use submodules to isolate your giant files from your source code. More on that here: http://git-scm.com/book/en/v2/Git-Tools-Submodules The examples talk about libraries, but this works for large bloated things like data samples for testing, images, movies, etc. You should be able to fly while developing, only pausing here and there if you need to look at new versions of giant data. Sometimes it's not even worth while tracking changes to such things. To address your issues with getting more clones of the data: If your git implementation supports hard links on your OS, this should be a breeze. The nature of your giant dataset is also at play. If you change some of it, are you changing giant blobs or a few rows in a set of millions? This should determine how effective VCS will be in playing a notification mechanism for it. Hope this helps.
DataSet
6,268,628
22
I am trying to load a dataset into R using the data() function. It works fine when I use the dataset name (e.g. data(Titanic) or data("Titanic")). What doesn't work for me is loading a dataset using a variable instead of its name. For example: # This works fine: > data(Titanic) # This works fine as well: > data("Titanic") # This doesn't work: > myvar <- Titanic > data(myvar) **Warning message: In data(myvar) : data set ‘myvar’ not found** Why is R looking for a dataset named "myvar" since it is not quoted? And since this is the default behavior, isn't there a way to load a dataset stored in a variable? For the record, what I am trying to do is to create a function that uses the "arules" package and mines association rules using Apriori. Thus, I need to pass the dataset as a parameter to that function. myfun <- function(mydataset) { data(mydataset) # doesn't work (data set 'mydataset' not found) rules <- apriori(mydataset) } edit - output of sessionInfo(): > sessionInfo() R version 3.0.0 (2013-04-03) Platform: i386-w64-mingw32/i386 (32-bit) locale: [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 [3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C [5] LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] arules_1.0-14 Matrix_1.0-12 lattice_0.20-15 RPostgreSQL_0.4 DBI_0.2-7 loaded via a namespace (and not attached): [1] grid_3.0.0 tools_3.0.0 And the actual errors I am getting (using, for example, a sample dataset "xyz"): xyz <- data.frame(c(1,2,3)) data(list=xyz) Warning messages: 1: In grep(name, files, fixed = TRUE) : argument 'pattern' has length > 1 and only the first element will be used 2: In grep(name, files, fixed = TRUE) : argument 'pattern' has length > 1 and only the first element will be used 3: In if (name %in% names(rds)) { : the condition has length > 1 and only the first element will be used 4: In grep(name, files, fixed = TRUE) : argument 'pattern' has length > 1 and only the first element will be used 5: In if (name %in% names(rds)) { : the condition has length > 1 and only the first element will be used 6: In grep(name, files, fixed = TRUE) : argument 'pattern' has length > 1 and only the first element will be used ... ... 32: In data(list = xyz) : c("data set ‘1’ not found", "data set ‘2’ not found", "data set ‘3’ not found")
Use the list argument. See ?data. data(list=myvar) You'll also need myvar to be a character string. myvar <- "Titanic" Note that myvar <- Titanic only worked (I think) because of the lazy loading of the Titanic data set. Most datasets in packages are loaded this way, but for other kinds of data sets, you'd still need the data command.
DataSet
19,912,833
21
I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in .Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead. I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in .Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options. I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks! -Eric Sipple
Since .NET 3.5 came out, I've exclusively used LINQ. It's really that good; I don't see any reason to use any of those old crutches any more. As great as LINQ is, though, I think any ORM system would allow you to do away with that dreck.
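For anyone weighing that advice against plain DataTable code, the LINQ to DataSet extension methods (AsEnumerable and Field<T> from System.Data.DataSetExtensions) show what the difference feels like. A small sketch with made-up column names:

using System.Data;
using System.Linq;

var names = ds.Tables[0].AsEnumerable()
    .Where(r => r.Field<int>("CustomerID") > 100)   // typed access to column values
    .Select(r => r.Field<string>("Name"))
    .ToList();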
DataSet
18,533
21
Reading Interactive Analysis of Web-Scale Datasets paper, I bumped into the concept of repetition and definition level. while I understand the need for these two, to be able to disambiguate occurrences, it attaches a repetition and definition level to each value. What is unclear to me is how they computed the levels... As illustrated in picture: It says: Consider field Code in Figure 2. It occurs three times in r1. Occurrences ‘en-us’ and ‘en’ are inside the first Name, while ’en-gb’ is in the third Name. To disambiguate these occurrences, we attach a repetition level to each value. It tells us at what repeated field in the field’s path the value has repeated. The field path Name.Language.Code contains two repeated fields, Name and Language. Hence, the repetition level of Code ranges between 0 and 2; level 0 denotes the start of a new record. Now suppose we are scanning record r1 top down. When we encounter ‘en-us’, we have not seen any repeated fields, i.e., the repetition level is 0. When we see ‘en’, field Language has repeated, so the repetitionlevelis2. I just can't get me head around it, Name.Language.Code in r1 has en-us and en values. While is the first one r = 0 and the second one r = 2 is it because two definitions were repeated ? (language and code) ? If it was: Name Language Code: en-us Name Language Code: en Name Language Code: en-gb Would it be ? 0 2 1 2 2 2 Definition levels. Each value of a field with path p, esp. every NULL, has a definition level specifying how many fields in p that could be undefined (because they are optional or repeated) are actually present in record. Why is then the definition level is 2 ? Isn't the path Name.Language contain two fields Code and Country where only 1 is optional\repeated ?
The Dremel striping algorithm is by no means trivial. To answer your first question: The repetition level of en-us is 0 since it is the first occurrence of a name.language.code path within the record. The repetition level of en is 2, since the repetition occurred at level 2 (the language tag). To answer your second question, for the following record, DocId: 20 Name Language Code: en-us Name Language Code: en Name Language Code: en-gb the entries for name.language.code would be en-us 0 2 en 1 2 en-gb 1 2 Explanation: The definition level is always two, since the two optional tags name and language are present. The repetition level for en-us is zero, since it is the first name.language.code within the record. The repetition level for en and en-gb is 1, since the repetition occurred at the name tag (level 1).
DataSet
43,568,132
20
I have an RDD[LabeledPoint] intended to be used within a machine learning pipeline. How do we convert that RDD to a DataSet? Note the newer spark.ml apis require inputs in the Dataset format.
Here is an answer that traverses an extra step - the DataFrame. We use the SQLContext to create a DataFrame and then create a DataSet using the desired object type - in this case a LabeledPoint: val sqlContext = new SQLContext(sc) val pointsTrainDf = sqlContext.createDataFrame(training) val pointsTrainDs = pointsTrainDf.as[LabeledPoint] Update Ever heard of a SparkSession ? (neither had I until now..) So apparently the SparkSession is the Preferred Way (TM) in Spark 2.0.0 and moving forward. Here is the updated code for the new (spark) world order: Spark 2.0.0+ approaches Notice in both of the below approaches (simpler one of which credit @zero323) we have accomplished an important savings as compared to the SQLContext approach: no longer is it necessary to first create a DataFrame. val sparkSession = SparkSession.builder().getOrCreate() val pointsTrainDs = sparkSession.createDataset(training) val model = new LogisticRegression() .train(pointsTrainDs.as[LabeledPoint]) Second way for Spark 2.0.0+ Credit to @zero323 val spark: org.apache.spark.sql.SparkSession = ??? import spark.implicits._ val trainDs = training.toDS() Traditional Spark 1.X and earlier approach val sqlContext = new SQLContext(sc) // Note this is *deprecated* in 2.0.0 import sqlContext.implicits._ val training = splits(0).cache() val test = splits(1) val trainDs = training.toDS() See also: How to store custom objects in Dataset? by the esteemed @zero323.
DataSet
37,513,667
20
There are a lot of examples online of how to fill a DataSet from a text file but I want to do the reverse. The only thing I've been able to find is this but it seems... incomplete? I want it to be in a readable format, not just comma delimited, so non-equal spacing between columns on each row if that makes sense. Here is an example of what I mean: Column1 Column2 Column3 Some info Some more info Even more info Some stuff here Some more stuff Even more stuff Bits and bobs Note: I only have one DataTable within my DataSet so no need to worry about multiple DataTables. EDIT: When I said "readable" I meant human-readable. Thanks in advance.
This should space out fixed length font text nicely, but it does mean it will process the full DataTable twice (pass 1: find longest text per column, pass 2: output text): static void Write(DataTable dt, string outputFilePath) { int[] maxLengths = new int[dt.Columns.Count]; for (int i = 0; i < dt.Columns.Count; i++) { maxLengths[i] = dt.Columns[i].ColumnName.Length; foreach (DataRow row in dt.Rows) { if (!row.IsNull(i)) { int length = row[i].ToString().Length; if (length > maxLengths[i]) { maxLengths[i] = length; } } } } using (StreamWriter sw = new StreamWriter(outputFilePath, false)) { for (int i = 0; i < dt.Columns.Count; i++) { sw.Write(dt.Columns[i].ColumnName.PadRight(maxLengths[i] + 2)); } sw.WriteLine(); foreach (DataRow row in dt.Rows) { for (int i = 0; i < dt.Columns.Count; i++) { if (!row.IsNull(i)) { sw.Write(row[i].ToString().PadRight(maxLengths[i] + 2)); } else { sw.Write(new string(' ', maxLengths[i] + 2)); } } sw.WriteLine(); } sw.Close(); } }
DataSet
7,174,077
20
The RODBC documentation suggests it is possible, but I am not sure how to read data from a Microsoft Access (the new .accdb format) file with this package into R (on Debian GNU/Linux). The vignette talks about drivers, but I do not quite understand how I can see which drivers are installed, and in particular, if I have a driver installed for me to access those .accdb files. What code do you use to read data from .accdb files? And please indicate what platform you are on and if you had to install a special driver.
To import a post-2007 Microsoft Access file (.accdb) into R, you can use the RODBC package. For an .accdb file called "foo.accdb" with the following tables, "bar" and "bin", stored on the desktop of John Doe's computer: library(RODBC) #loads the RODBC package dta <- odbcConnectAccess2007("C:/Users/JohnDoe/Desktop/foo.accdb") #specifies the file path df1 <- sqlFetch(dta, "bar") #loads the table called 'bar' in the original Access file df2 <- sqlFetch(dta, "bin") #loads the table called 'bin' in the original Access file
DataSet
7,109,844
20
I download and clip some youtube videos with pytube but some videos are not downloading and asking for age verification. How can I solve this? Thanks for your advice
For pytube 15.0.0 I had the AgeRestrictedError in streams contents even using the use_oauth option. I fixed the problem only replacing ANDROID_MUSIC with ANDROID as "client" at line 223 of innertube.py: def __init__(self, client='ANDROID_MUSIC', use_oauth=False, allow_cache=True): def __init__(self, client='ANDROID', use_oauth=False, allow_cache=True):
DataSet
75,791,765
19
I'm using Visual Studio 2008 with C#. I have a .xsd file and it has a table adapter. I want to change the table adapter's command timeout. Thanks for your help.
With some small modifications csl's idea works great. partial class FooTableAdapter { /** * <summary> * Set timeout in seconds for Select statements. * </summary> */ public int SelectCommandTimeout { set { for (int i = 0; i < this.CommandCollection.Length; i++) if (this.CommandCollection[i] != null) this.CommandCollection[i].CommandTimeout = value; } } } To use it, just set this.FooTableAdapter.SelectCommandTimeout = 60; somewhere before the this.FooTableAdapter.Fill(); If you need to change the timeout on a lot of table adapters, you could create a generic extension method and have it use reflection to change the timeout. /// <summary> /// Set the Select command timeout for a Table Adapter /// </summary> public static void TableAdapterCommandTimeout<T>(this T TableAdapter, int CommandTimeout) where T : global::System.ComponentModel.Component { foreach (var c in typeof(T).GetProperty("CommandCollection", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.GetProperty | System.Reflection.BindingFlags.Instance).GetValue(TableAdapter, null) as System.Data.SqlClient.SqlCommand[]) c.CommandTimeout = CommandTimeout; } Usage: this.FooTableAdapter.TableAdapterCommandTimeout(60); this.FooTableAdapter.Fill(...); This is a little slower. And there is the possibility of an error if you use it on the wrong type of object. (As far as I know, there is no "TableAdapter" class that you could limit it to.)
DataSet
1,192,171
19
I am desperately trying to download the Ta-Feng grocery dataset for few days but appears that all links are broken. I needed for data mining / machine learning research for my msc thesis. I also have the Microsoft grocery database, the Belgian store and Supermarket.arff from Weka. However in the research they say Ta Feng is largest and most interesting from all public available data sets. http://recsyswiki.com/wiki/Grocery_shopping_datasets I will be super thankful for any help :) Cheers!
The person that down voted doesn't understand the difficulty to find this valuable piece of information for machine learning related to supermarket scenarios. It is the biggest publicly available dataset containing 4 months of shopping transactions of the Ta-Feng supermarket. I got it from Prof. Chun Nan who was very kind to send it to me because the servers of his previous institute in Taiwan were not supporting it anymore. Here is a link for everybody that needs it: https://sites.google.com/site/dataminingcourse2009/spring2016/annoucement2016/assignment3/D11-02.ZIP
DataSet
25,014,904
18
I would like to read the contents of a CSV file and create a dataset. I am trying like this: var lines = File.ReadAllLines("test.csv").Select(a => a.Split(';')); DataSet ds = new DataSet(); ds.load(lines); but apparently this is not correct.
You need to add the reference Microsoft.VisualBasic.dll to use TextFieldParser Class. private static DataTable GetDataTabletFromCSVFile(string csv_file_path) { DataTable csvData = new DataTable(); try { using(TextFieldParser csvReader = new TextFieldParser(csv_file_path)) { csvReader.SetDelimiters(new string[] { "," }); csvReader.HasFieldsEnclosedInQuotes = true; string[] colFields = csvReader.ReadFields(); foreach (string column in colFields) { DataColumn datecolumn = new DataColumn(column); datecolumn.AllowDBNull = true; csvData.Columns.Add(datecolumn); } while (!csvReader.EndOfData) { string[] fieldData = csvReader.ReadFields(); //Making empty value as null for (int i = 0; i < fieldData.Length; i++) { if (fieldData[i] == "") { fieldData[i] = null; } } csvData.Rows.Add(fieldData); } } } catch (Exception ex) { } return csvData; } } See this article for more info : http://www.morgantechspace.com/2013/08/how-to-read-data-from-csv-file-in-c.html
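Since the question asks for a DataSet rather than a bare DataTable, the helper above can simply be wrapped; the file path here is just a placeholder:

DataSet ds = new DataSet();
ds.Tables.Add(GetDataTabletFromCSVFile(@"C:\data\test.csv"));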
DataSet
16,606,753
18
Does anyone know of any resources that provide good, useful stock datasets? For example, I've downloaded a SQL script that includes all of the U.S. states, cities, and zipcodes. This saved me a lot of time in a recent application where I wanted to be able to do lookups by geography. Are any of you aware of other useful datasets that are freely available for download? For example: Blacklisted IP addresses Names of colleges/universities Names of corporations/stock symbols Anyone have any recommendations? EDIT: As an example, here is the location where I found a MySQL script containing all of the U.S. zip codes and their corresponding latitude/longitude. Has anyone else found similarly useful datasets in SQL that can be easily imported and used? http://www.chrissibert.com/blog/wp-content/uploads/2009/06/zipcodes.7z EDIT 2: To clarify what type of datasets I'm talking about... I'm referring to datasets that can be immediately useful for applications, can be applied across a variety of scenarios, and typically represent information that is easy to find for small cases but harder to compile for larger data sets. The zip code database is a great example to me. It's not hard to get the lat/long for a single given zip code. But, it's a bit more time consuming to get the values for all valid zip codes in the U.S. This data is also not useful to a single industry or business sector, but can be applied across a range of applications.
Lots of links to open data sets here: http://readwrite.com/2008/04/09/where_to_find_open_data_on_the/ although I doubt any of them will generate SQL statements for you.
DataSet
4,512,600
18
Given a list of objects, I am needing to transform it into a dataset where each item in the list is represented by a row and each property is a column in the row. This DataSet will then be passed to an Aspose.Cells function in order to create an Excel document as a report. Say I have the following: public class Record { public int ID { get; set; } public bool Status { get; set; } public string Message { get; set; } } Given a List records, how do I transform it into a DataSet as follows: ID Status Message 1 true "message" 2 false "message2" 3 true "message3" ... At the moment the only thing I can think of is as follows: DataSet ds = new DataSet ds.Tables.Add(); ds.Tables[0].Add("ID", typeof(int)); ds.Tables[0].Add("Status", typeof(bool)); ds.Tables[0].Add("Message", typeof(string)); foreach(Record record in records) { ds.Tables[0].Rows.Add(record.ID, record.Status, record.Message); } But this way leaves me thinking there must be a better way since at the very least if new properties are added to Record then they won't show up in the DataSet...but at the same time it allows me to control the order each property is added to the row. Does anyone know of a better way to do this?
You can do it through reflection and generics, inspecting the properties of the underlying type. Consider this extension method that I use: public static DataTable ToDataTable<T>(this IEnumerable<T> collection) { DataTable dt = new DataTable("DataTable"); Type t = typeof(T); PropertyInfo[] pia = t.GetProperties(); //Inspect the properties and create the columns in the DataTable foreach (PropertyInfo pi in pia) { Type ColumnType = pi.PropertyType; if ((ColumnType.IsGenericType)) { ColumnType = ColumnType.GetGenericArguments()[0]; } dt.Columns.Add(pi.Name, ColumnType); } //Populate the data table foreach (T item in collection) { DataRow dr = dt.NewRow(); dr.BeginEdit(); foreach (PropertyInfo pi in pia) { if (pi.GetValue(item, null) != null) { dr[pi.Name] = pi.GetValue(item, null); } } dr.EndEdit(); dt.Rows.Add(dr); } return dt; }
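With that extension method in place, getting from the question's List<Record> to a DataSet is then just this (records being the list from the question):

DataSet ds = new DataSet();
ds.Tables.Add(records.ToDataTable());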
DataSet
523,153
18
I would like to read in R a dataset from google drive as the screenshot indicated. Neither url <- "https://drive.google.com/file/d/1AiZda_1-2nwrxI8fLD0Y6e5rTg7aocv0" temp <- tempfile() download.file(url, temp) bank <- read.table(unz(temp, "bank-additional.csv")) unlink(temp) nor library(RCurl) bank_url <- dowload.file(url, "bank-additional.csv", method = 'curl') works. I have been working on this for many hours. Any hints or solutions would be really appreciate.
Try temp <- tempfile(fileext = ".zip") download.file("https://drive.google.com/uc?authuser=0&id=1AiZda_1-2nwrxI8fLD0Y6e5rTg7aocv0&export=download", temp) out <- unzip(temp, exdir = tempdir()) bank <- read.csv(out[14], sep = ";") str(bank) # 'data.frame': 4119 obs. of 21 variables: # $ age : int 30 39 25 38 47 32 32 41 31 35 ... # $ job : Factor w/ 12 levels "admin.","blue-collar",..: 2 8 8 8 1 8 1 3 8 2 ... # $ marital : Factor w/ 4 levels "divorced","married",..: 2 3 2 2 2 3 3 2 1 2 ... # <snip> The URL should correspond to the URL that you use to download the file using your browser. As @Mako212 points out, you can also make use of the googledrive package, substituting drive_download for download.file: library(googledrive) temp <- tempfile(fileext = ".zip") dl <- drive_download( as_id("1AiZda_1-2nwrxI8fLD0Y6e5rTg7aocv0"), path = temp, overwrite = TRUE) out <- unzip(temp, exdir = tempdir()) bank <- read.csv(out[14], sep = ";")
DataSet
47,851,761
17
I'm using a CSV dataset config element, which is reading from a file like this: abd sds ase sdd ssd cvv Which, basically, has a number of 3 letter random string. I'm assigning them to a variable called ${random_3}. Now, I want to use values from this list multiple times within the same thread, but each time I want to move to next. For example, I want the first sampler to use abd, the 2nd to use sds, then ase, etc. But if I just use ${random_3} then only the first one (abd) is used wherever it's referred to. Is there a way I can specify to loop through the values from the CSV dataset within a thread?
CSV Data Set Config works fine for this. All of the values need to be in one column in the file and assign them to the variable as described. Create a Thread Group that has as many threads for as many users as you want iterating over the file (i.e. acting on the HTTP Request). Assuming 1 user, set the number of threads to 1, loop count to 1. Inside the thread group you will need to add a Loop Controller or a While Controller. You indicated that you want to loop through the whole data set. If you add a loop controller you will want to set the Loop Count to 6, since you have 6 values, one for each value. Alternately (and easier for processing the whole file) add a While Controller instead of a Loop Controller and set ${random_3} as the while condition. It is important to set the CSV Data Set Recycle on EOF and Stop Thread on EOF values correctly. If you plan to iterate over the file more than once you must set "Recycle on EOF" to True (i.e. instruct jMeter to move back to the top of the CSV file). Set "Stop Thread on EOF" to False if you are using a loop controller, true if you are using a while controller and want to stop after reading the whole csv dataset. Add the CSV Data Set Config as a child of the controller along with the HTTP Request. View the results using any listener you want to use.
DataSet
7,317,943
17
Does anyone know of a tool that can inspect a specified schema and generate random data based on the tables and columns of that schema?
This is an interesting question. It is easy enough to generate random values - a simple loop round the data dictionary with calls to DBMS_RANDOM would do the trick. Except for two things. One is, as @FrustratedWithForms points out, there is the complication of foreign key constraints. Let's tip lookup values (reference data) into the mix too. The second is, random isn't very realistic. The main driver for using random data is a need for large volumes of data, probably for performance testing. But real datasets aren't random, they contain skews and clumps, variable string lengths, and of course patterns (especially where dates are concerned). So, rather than trying to generate random data I suggest you try to get a real dataset. Ideally your user/customer will be able to provide one, preferably anonymized. Otherwise try taking something which is already in the public domain, and massage it to fit your specific requirements. The Info Chimps are the top bananas when it comes to these matters. Check them out.
DataSet
6,189,275
17
What are the benefits of using the c# method DataRow.IsNull to determine a null value over checking if the row equals DbNull.value? if(ds.Tables[0].Rows[0].IsNull("ROWNAME")) {do stuff} vs if(ds.Tables[0].Rows[0]["ROWNAME"] == DbNull.value) {do stuff}
There is no real practical benefit. Use whichever one seems more readable to you. As to the particular differences between them, the basic answer is that IsNull queries the null state for a particular record within a column. Using == DBNull.Value actually retrieves the value and does substitution in the case that it's actually null. In other words, IsNull checks the state without actually retrieving the value, and thus is slightly faster (in theory, at least). It's theoretically possible for a column to return something other than DBNull.Value for a null value if you were to use a custom storage type, but this is never done (in my experience). If this were the case, IsNull would handle the case where the storage type used something other than DBNull.Value, but, again, I've never seen this done.
DataSet
5,599,390
17
I have to consume a .NET hosted web service from a Java application. Interoperability between the two is usually very good. The problem I'm running into is that the .NET application developer chose to expose data using the .NET DataSet object. There are lots of articles written as to why you should not do this and how it makes interoperability difficult:

http://www.hanselman.com/blog/ReturningDataSetsFromWebServicesIsTheSpawnOfSatanAndRepresentsAllThatIsTrulyEvilInTheWorld.aspx
http://www.lhotka.net/weblog/ThoughtsOnPassingDataSetObjectsViaWebServices.aspx
https://web.archive.org/web/20210616111510/https://aspnet.4guysfromrolla.com/articles/051805-1.aspx
http://www.theserverside.net/tt/articles/showarticle.tss?id=Top5WSMistakes

My problem is that despite this not being recommended practice, I am stuck with having to consume a web service returning a DataSet with Java. When you generate a proxy for something like this with anything other than .NET you basically end up with an object that looks like this:

@XmlElement(namespace = "http://www.w3.org/2001/XMLSchema", required = true)
protected Schema schema;
@XmlAnyElement(lax = true)
protected Object any;

The first field is the actual schema that should describe the DataSet. When I process this using JAX-WS and JAXB in Java, it brings all of XS-Schema in as Java objects to be represented here. Walking the object tree of JAXB is possible but not pretty. The any field represents the raw XML for the DataSet, in the shape specified by the schema field. The structure of the dataset is pretty consistent but the data types do change. I need access to the type information and the schema does vary from call to call. I've thought of a few options but none seem like 'good' options.

Trying to generate Java objects from the schema using JAXB at runtime seems to be a bad idea. This would be way too slow since it would need to happen every time.
Brute force walk the schema tree using the JAXB objects that JAX-WS brought in.
Maybe instead of using JAXB to parse the schema it would be easier to deal with it as XML and use XPath to try and find the type information I need.

Are there other options I have not considered? Is there a Java library to parse DataSet objects easily? What have other people done who may have similar situations?
Unfortunately I don't think there's an easy answer here. What I would do is create a Proxy web service in .NET that you could call from Java that would receive the dataset, transform it into something more consumable and then return that to your Java code.
DataSet
2,667,646
17
DataSets were one of the big things in .NET 1.0 and even now when using .NET 3.5 I still find myself having to use them....especially when I have to call a stored proc which returns a dataset which I then end up having to manually transform into an object to make it easier to work with. I've never really liked DataSets and have found them annoying to use...and as a result I've tended to keep my knowledge about them to a bare minimum (probably a very bad thing!). I also prefer to quickly transform them into an object or list of objects so I can easily manipulate them in my code. Have DataSets passed their use by date? With the advent of O/R mappers such as NHibernate, I'm wondering if DataSets will die out or is there still a place for them? At the moment, I'm torn between whether I should set aside time to revisit DataSets and learn how to use them properly or to get behind O/R mappers 100% and just ditch DataSets altogether. Do DataSets offer anything that technologies such as NHibernate and LINQ etc can't? If not, why then do we still use them?
For better or worse, the answer is simplicity. When the 2.0 Framework came out and TableAdapters were included in the process, it became ridiculously easy to get your basic CRUD type application, or even a front page showing data, rolling. Simply connect to your sever, drag your table(s) over, and the structure was in place, including foreign/primary/unique key references. Need to perform updates on this data? Use the wizard, specify your existing procedures, or let the wizard generate the adhoc/stored procedures for you. And you're done, hook that up to a GridView and you can quickly do lots of things: resort, requery, edit multiple records while disconnected, and update in single or bulk. This kind of convenience is hard to pass up when you're working on projects that want to get done fast. Plus having things in this native "DataTable" format become convenient for XML shenanigans if that is what you need since the DataSet model uses XML under the hood for a lot of stuff. I will admit that I have not checked out the latest versions of the ORMs out there, and I'm not sure if there is a wizard for LINQ that will do this much within a few clicks. And most people are a little slow to adapt newer technology as is, so it easy to see how it is still being used heavily. Seeing that new Dynamic Data Service site/project is built off of LINQ to SQL or LINQ to EF, I think the tide might finally change to the newer model.
DataSet
552,371
17
I run my Spark application in a YARN cluster. In my code I use the number of available cores of the queue to create partitions on my dataset:

Dataset ds = ... ;
ds.coalesce(config.getNumberOfCores());

My question: how can I get the number of available cores of the queue programmatically, and not by configuration?
There are ways to get both the number of executors and the number of cores in a cluster from Spark. Here is a bit of Scala utility code that I've used in the past. You should easily be able to adapt it to Java. There are two key ideas:

The number of workers is the number of executors minus one or sc.getExecutorStorageStatus.length - 1.
The number of cores per worker can be obtained by executing java.lang.Runtime.getRuntime.availableProcessors on a worker.

The rest of the code is boilerplate for adding convenience methods to SparkContext using Scala implicits. I wrote the code for 1.x years ago, which is why it is not using SparkSession.

One final point: it is often a good idea to coalesce to a multiple of your cores as this can improve performance in the case of skewed data. In practice, I use anywhere between 1.5x and 4x, depending on the size of data and whether the job is running on a shared cluster or not.

import org.apache.spark.SparkContext
import scala.language.implicitConversions

class RichSparkContext(val sc: SparkContext) {

  def executorCount: Int = sc.getExecutorStorageStatus.length - 1 // one is the driver

  def coresPerExecutor: Int = RichSparkContext.coresPerExecutor(sc)

  def coreCount: Int = executorCount * coresPerExecutor

  def coreCount(coresPerExecutor: Int): Int = executorCount * coresPerExecutor

}

object RichSparkContext {

  trait Enrichment {
    implicit def enrichMetadata(sc: SparkContext): RichSparkContext =
      new RichSparkContext(sc)
  }

  object implicits extends Enrichment

  private var _coresPerExecutor: Int = 0

  def coresPerExecutor(sc: SparkContext): Int =
    synchronized {
      if (_coresPerExecutor == 0)
        sc.range(0, 1).map(_ => java.lang.Runtime.getRuntime.availableProcessors).collect.head
      else _coresPerExecutor
    }

}

Update

Recently, getExecutorStorageStatus has been removed. We have switched to using SparkEnv's blockManager.master.getStorageStatus.length - 1 (the minus one is for the driver again). The normal way to get to it, via env of SparkContext is not accessible outside of the org.apache.spark package. Therefore, we use an encapsulation violation pattern:

package org.apache.spark

object EncapsulationViolator {
  def sparkEnv(sc: SparkContext): SparkEnv = sc.env
}
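If you only need a sensible partition count rather than the exact queue capacity, a simpler (and coarser) alternative in Java is to lean on Spark's default parallelism, which on YARN is normally the total number of cores granted to the application; this is a different technique from the executor-counting code above, and the 2x factor is an arbitrary choice in the spirit of the multiple-of-cores advice:

// rough Java sketch; assumes an existing SparkSession `spark` and Dataset<Row> `ds`
int parallelism = spark.sparkContext().defaultParallelism();  // typically total executor cores on YARN
Dataset<Row> repartitioned = ds.coalesce(parallelism * 2);    // 2x is just an example multiplier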
DataSet
47,399,087
16
So, imagine having access to sufficient data (millions of datapoints for training and testing) of sufficient quality. Please ignore concept drift for now and assume the data is static and does not change over time. Does it even make sense to use all of that data in terms of the quality of the model?

Brain and Webb (http://www.csse.monash.edu.au/~webb/Files/BrainWebb99.pdf) have included some results on experimenting with different dataset sizes. Their tested algorithms converge to being somewhat stable after training with 16,000 or 32,000 datapoints. However, since we're living in the big data world we have access to datasets of millions of points, so the paper is somewhat relevant but hugely outdated.

Is there any more recent research on the impact of dataset sizes on learning algorithms (Naive Bayes, Decision Trees, SVM, neural networks, etc.)?

When does a learning algorithm converge to a certain stable model for which more data does not increase the quality anymore?
Can it happen after 50,000 datapoints, or maybe after 200,000, or only after 1,000,000?
Is there a rule of thumb?
Or maybe there is no way for an algorithm to converge to a stable model, to a certain equilibrium?

Why am I asking this? Imagine a system with limited storage and a huge number of unique models (thousands of models, each with its own unique dataset) and no way of increasing the storage. So limiting the size of a dataset is important.

Any thoughts or research on this?
I did my master's thesis on this subject so I happen to know quite a bit about it. In a few words in the first part of my master's thesis, I took some really big datasets (~5,000,000 samples) and tested some machine learning algorithms on them by learning on different % of the dataset (learning curves). The hypothesis I made (I was using scikit-learn mostly) was not to optimize the parameters, using the default parameters for the algorithms (I had to make this hypothesis for practical reasons, without optimization some simulations took already more than 24 hours on a cluster). The first thing to note is that, effectively, every method will lead to a plateau for a certain portion of the dataset. You cannot, however, draw conclusions about the effective number of samples it takes for a plateau to be reached for the following reasons : Every dataset is different, for really simple datasets they can give you nearly everything they have to offer with 10 samples while some still have something to reveal after 12000 samples (See the Higgs dataset in my example above). The number of samples in a dataset is arbitrary, in my thesis I tested a dataset with wrong samples that were only added to mess with the algorithms. We can, however, differentiate two different types of algorithms that will have a different behavior: parametric (Linear, ...) and non-parametric (Random Forest, ...) models. If a plateau is reached with a non-parametric that means the rest of the dataset is "useless". As you can see while the Lightning method reaches a plateau very soon on my picture that doesn't mean that the dataset doesn't have anything left to offer but more than that is the best that the method can do. That's why non-parametric methods work the best when the model to get is complicated and can really benefit from a large number of training samples. So as for your questions : See above. Yes, it all depends on what is inside the dataset. For me, the only rule of thumb is to go with cross-validation. If you are in the situation in which you think that you will use 20,000 or 30,000 samples you're often in a case where cross-validation is not a problem. In my thesis, I computed the accuracy of my methods on a test set, and when I did not notice a significant improvement I determined the number of samples it took to get there. As I said there are some trends that you can observe (parametric methods tend to saturate more quickly than non-parametric) Sometimes when the dataset is not large enough you can take every datapoint you have and still have room for improvement if you had a larger dataset. In my thesis with no optimisation on the parameters, the Cifar-10 dataset behaved that way, even after 50,000 none of my algorithm had already converged. I'd add that optimizing the parameters of the algorithms have a big influence on the speed of convergence to a plateau, but it requires another step of cross-validation. Your last sentence is highly related to the subject of my thesis, but for me, it was more related to the memory and time available for doing the ML tasks. (As if you cover less than the whole dataset you'll have a smaller memory requirement and it will be faster). About that, the concept of "core-sets" could really be interesting for you. I hope I could help you, I had to stop because I could on and on about that but if you need more clarifications I'd be happy to help.
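If you want to see where your own data plateaus, a minimal learning-curve sketch with scikit-learn looks like the following; clf, X and y are placeholders for your estimator and dataset, and nothing here is specific to the thesis results above:

import numpy as np
from sklearn.model_selection import learning_curve

# clf, X, y are placeholders for your estimator and data
train_sizes, train_scores, test_scores = learning_curve(
    clf, X, y, train_sizes=np.linspace(0.1, 1.0, 10), cv=5)

# mean cross-validated score per training-set size; the plateau is where this stops improving
print(dict(zip(train_sizes, test_scores.mean(axis=1))))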
DataSet
25,665,017
16
I am working on sentiment analysis and I am using the dataset given in this link: http://www.cs.jhu.edu/~mdredze/datasets/sentiment/index2.html and I have divided my dataset into a 50:50 ratio. 50% are used as test samples and 50% as train samples; the features are extracted from the train samples and classification is performed using a Weka classifier, but my prediction accuracy is about 70-75%. Can anybody suggest some other datasets which can help me to improve the result? I have used unigrams, bigrams and POS tags as my features.
There are many sources to get sentiment analysis dataset: huge ngrams dataset from google storage.googleapis.com/books/ngrams/books/datasetsv2.html http://www.sananalytics.com/lab/twitter-sentiment/ http://inclass.kaggle.com/c/si650winter11/data http://nlp.stanford.edu/sentiment/treebank.html or you can look into this global ML dataset repository: https://archive.ics.uci.edu/ml Anyway, it does not mean it will help you to get a better accuracy for your current dataset because the corpus might be very different from your dataset. Apart from reducing the testing percentage vs training, you could: test other classifiers or fine tune all hyperparameters using semi-automated wrapper like CVParameterSelection or GridSearch, or even auto-weka if it fits. It is quite rare to use 50/50, 80/20 is quite a commonly occurring ratio. A better practice is to use: 60% for training, 20% for cross validation, 20% for testing.
DataSet
24,605,702
16
I'm implementing an R package, where I have several big .rda data files in the 'data' folder. When I build the package (with R CMD build to create the .tar.gz packed file), also the data files are included in the package, and since they are really big, this makes the build (as well the check) process very slow, and the final package size uselessly big. These data are downloaded from some DB through a function of the package, so the intent is not to include the data in the package, but to let the user populates the data folder from its own DB. The data that I use are for test, and it makes no sense to include them into the package. Summarizing my question is: is it possible to keep the data in the 'data' folder, but exclude them from the built package? Edit Ok, I found a first solution by creating a file named .Rbuildignore that contains a line: ^data/.+$ anyway the problem remains for the R CMD install and R CMD check processes, that do not take into account the .Rbuildignore file. Any suggestion to exclude a folder also from the install/check processes?
If you use .Rbuildignore you should first build then check your package (it's not a check-ignore). Here a few tests in a Debian environment and a random package: l@np350v5c:~/src/yapomif/pkg$ ls data DESCRIPTION man NAMESPACE R l@np350v5c:~/src/yapomif/pkg$ R > save(Formaldehyde, file = "data/formal.rda") l@np350v5c:~/src/yapomif/pkg$ ls -l totale 20 drwxr-xr-x 2 l l 4096 mag 1 01:31 data -rw-r--r-- 1 l l 349 apr 25 00:35 DESCRIPTION drwxr-xr-x 2 l l 4096 apr 25 01:10 man -rw-r--r-- 1 l l 1189 apr 25 00:33 NAMESPACE drwxr-xr-x 2 l l 4096 apr 25 01:02 R l@np350v5c:~/src/yapomif/pkg$ ls -l data/ totale 4 -rw-r--r-- 1 l l 229 mag 1 01:31 formal.rda Now i create exactly your .Rbuildignore l@np350v5c:~/src/yapomif/pkg$ em .Rbuildignore l@np350v5c:~/src/yapomif/pkg$ cat .Rbuildignore ^data/.+$ Ok let's build l@np350v5c:~/src/yapomif/pkg$ cd .. l@np350v5c:~/src/yapomif$ R CMD build pkg > tools:::.build_packages() * checking for file ‘pkg/DESCRIPTION’ ... OK * preparing ‘yapomif’: * checking DESCRIPTION meta-information ... OK * checking for LF line-endings in source and make files * checking for empty or unneeded directories Removed empty directory ‘yapomif/data’ * building ‘yapomif_0.8.tar.gz’ Fine (you see the message about yapomif/data). Now check the package l@np350v5c:~/src/yapomif$ R CMD check yapomif_0.8.tar.gz > tools:::.check_packages() * using log directory ‘/home/l/.src/yapomif/yapomif.Rcheck’ * using R version 3.1.0 (2014-04-10) * using platform: x86_64-pc-linux-gnu (64-bit) ... ... everything as usual Now let's check the file (moved to home directory to keep my development dir clean) l@np350v5c:~/src/yapomif$ mv yapomif_0.8.tar.gz ~ l@np350v5c:~/src/yapomif$ cd l@np350v5c:~$ tar xvzf yapomif_0.8.tar.gz l@np350v5c:~$ ls yapomif DESCRIPTION man NAMESPACE R so there is no data directory BUT if l@np350v5c:~/src/yapomif$ R CMD check pkg ... Undocumented data sets: ‘Formaldehyde’ So, as stated, first build, then check. HTH, Luca
DataSet
23,382,030
16
In my dataset I have a number of continuous and dummy variables. For analysis with glmnet, I want the continuous variables to be standardized but not the dummy variables. I currently do this manually by first defining a dummy vector of columns that have only values of [0,1] and then using the scale command on all the non-dummy columns. Problem is, this isn't very elegant. But glmnet has a built in standardize argument. By default will this standardize the dummies too? If so, is there an elegant way to tell glmnet's standardize argument to skip dummies?
In short, yes - this will standardize the dummy variables, but there's a reason for doing so. The glmnet function takes a matrix as an input for its X parameter, not a data frame, so it doesn't make the distinction for factor columns which you may have if the parameter was a data.frame. If you take a look at the R function, glmnet codes the standardize parameter internally as isd = as.integer(standardize) Which converts the R boolean to a 0 or 1 integer to feed to any of the internal FORTRAN functions (elnet, lognet, et. al.) If you go even further by examining the FORTRAN code (fixed width - old school!), you'll see the following block: subroutine standard1 (no,ni,x,y,w,isd,intr,ju,xm,xs,ym,ys,xv,jerr) 989 real x(no,ni),y(no),w(no),xm(ni),xs(ni),xv(ni) 989 integer ju(ni) 990 real, dimension (:), allocatable :: v allocate(v(1:no),stat=jerr) 993 if(jerr.ne.0) return 994 w=w/sum(w) 994 v=sqrt(w) 995 if(intr .ne. 0)goto 10651 995 ym=0.0 995 y=v*y 996 ys=sqrt(dot_product(y,y)-dot_product(v,y)**2) 996 y=y/ys 997 10660 do 10661 j=1,ni 997 if(ju(j).eq.0)goto 10661 997 xm(j)=0.0 997 x(:,j)=v*x(:,j) 998 xv(j)=dot_product(x(:,j),x(:,j)) 999 if(isd .eq. 0)goto 10681 999 xbq=dot_product(v,x(:,j))**2 999 vc=xv(j)-xbq 1000 xs(j)=sqrt(vc) 1000 x(:,j)=x(:,j)/xs(j) 1000 xv(j)=1.0+xbq/vc 1001 goto 10691 1002 Take a look at the lines marked 1000 - this is basically applying the standardization formula to the X matrix. Now statistically speaking, one does not generally standardize categorical variables to retain the interpretability of the estimated regressors. However, as pointed out by Tibshirani here, "The lasso method requires initial standardization of the regressors, so that the penalization scheme is fair to all regressors. For categorical regressors, one codes the regressor with dummy variables and then standardizes the dummy variables" - so while this causes arbitrary scaling between continuous and categorical variables, it's done for equal penalization treatment.
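If you do decide to stick with the manual approach described in the question (standardize only the continuous columns yourself), a rough R sketch would be the following; mydata, cont_cols and y are placeholders, and standardize = FALSE simply stops glmnet from rescaling everything again:

library(glmnet)
# design matrix with dummies already coded 0/1; "y" assumed to be the response column
x <- as.matrix(mydata[, setdiff(names(mydata), "y")])
x[, cont_cols] <- scale(x[, cont_cols])          # standardize continuous columns only
fit <- glmnet(x, mydata$y, standardize = FALSE)  # glmnet then leaves the dummies untouched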
DataSet
17,887,747
16
I am creating my own R package and I was wondering what are the possible methods that I can use to add (time-series) datasets to my package. Here are the specifics: I have created a package subdirectory called data and I am aware that this is the location where I should save the datasets that I want to add to my package. I am also cognizant of the fact that the files containing the data may be .rda, .txt, or .csv files. Each series of data that I want to add to the package consists of a single column of numbers (eg. of the form 340 or 4.5) and each series of data differs in length. So far, I have saved all of the datasets into a .txt file. I have also successfully loaded the data using the data() function. Problem not solved, however. The problem is that each series of data loads as a factor except for the series greatest in length. The series that load as factors contain missing values (of the form '.'). I had to add these missing values in order to make each column of data the same in length. I tried saving the data as unequal columns, but I received an error message after calling data(). A consequence of adding missing values to get the data to load is that once the data is loaded, I need to remove the NA's in order to get on with my analysis of the data! So, this clearly is not a good way of doing things. Ideally (I suppose), I would like the data to load as numeric vectors or as a list. In this way, I wouldn't need the NA's appended to the end of each series. How do I solve this problem? Should I save all of the data into one single file? If so, in what format should I do it? Perhaps I should save the datasets into a number of files? Again, in which format? What is the best practical way of doing this? Any tips would greatly be appreciated.
I'm not sure if I understood your question correctly. But if you edit your data in your favorite format and save it with

save(myediteddata, file = "data.rda")

the data should be loaded exactly the way you saw it in R. To load all files in the data directory you should add

LazyData: true

to your DESCRIPTION file, in your package. If this doesn't help, you could post one of your files and a print of the format you want; this will help us to help you ;)
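For series of unequal length specifically, one way to avoid the NA padding altogether is to store them as a named list in a single .rda file; the names and values below are made up, and the path assumes you save from the package root:

my_series <- list(
  series_a = c(340, 342, 351),
  series_b = c(4.5, 4.7),
  series_c = c(120, 118, 125, 130)
)
save(my_series, file = "data/my_series.rda")
# after building the package: data(my_series); my_series$series_a, etc. -- no NAs needed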
DataSet
16,507,295
16
The MSDN claims that the order is:

Child table: delete records.
Parent table: insert, update, and delete records.
Child table: insert and update records.

I have a problem with that. Example:

ParentTable has two records, parent1 (Id: 1) and parent2 (Id: 2).
ChildTable has a record child1 (Id: 1, ParentId: 1).

If we update child1 to have the new parent parent2, and then we delete parent1:

We have nothing to delete in the child table.
We delete parent1: we broke the constraint, because the child is still attached to parent1, unless we update it first.

So what is the right order, and is the MSDN wrong on the subject? My personal thoughts are:

Child table: delete records.
Parent table: insert, update records.
Child table: insert and update records.
Parent table: delete records.

But the problem is, with a potentially unique constraint, we must always delete the records in a table before adding new ones... So I have no solution right now for committing my data to my database.

Edit: thanks for the answers, but your corner case is my daily case... I opted for the ugly solution of disabling the constraints, then updating the database, and re-enabling the constraints. I'm still searching for a better solution.
You have to take their context into account. MS said When updating related tables in a dataset, it is important to update in the proper sequence to reduce the chance of violating referential integrity constraints. in the context of writing client data application software. Why is it important to reduce the chance of violating referential integrity constraints? Because violating those constraints means more round trips between the dbms and the client, either for the client code to handle the constraint violations, or for the human user to handle the violations, more time taken, more load on the server, more opportunities for human error, and more chances for concurrent updates to change the underlying data (possibly confusing either the application code, the human user, or both). And why do they consider their procedure the right way? Because it provides a single process that will avoid referential integrity violations in almost all the common cases, and even in a lot of the uncommon ones. For example . . . If the update is a DELETE operation on the referenced table, and if foreign keys in the referencing tables are declared as ON DELETE CASCADE, then the optimal thing is to simply delete the referenced row (the parent row), and let the dbms manage the cascade. (This is also the optimal thing for ON DELETE SET DEFAULT, and for ON DELETE SET NULL.) If the update is a DELETE operation on the referenced table, and if foreign keys in the referencing tables are declared as ON DELETE RESTRICT, then the optimal thing is to delete all the referencing rows (child rows) first, then delete the referenced row. But, with proper use of transactions, MS's procedure leaves the database in a consistent state regardless. The value is that it's a single, client-side process to code and to maintain, even though it's not optimal in all cases. (That's often the case in software design--choosing a single way that's not optimal in all cases. ActiveRecord leaps to mind.) You said Example : ParentTable have two records parent1(Id : 1) and parent2(Id : 2) ChildTable have a record child1(Id : 1, ParentId : 1) If we update the child1 to have a new parent parent2, and the we delete parent1. We have nothing to delete in child table We delete parent1 : we broke the constraint, because the child is still attached to parent1, unless we update it first. That's not a referential integrity issue; it's a procedural issue. This problem clearly requires two transactions. Update the child to have a new parent, then commit. This data must be corrected regardless of what happens to the first parent. Specifically, this data must be corrected even if there are concurrent updates or other constraints that make it either temporarily or permanently impossible to delete the first parent. (This isn't a referential integrity issue, because there's no ON DELETE SET TO NEXT PARENT ID OR MAKE YOUR BEST GUESS clause in SQL foreign key constraints.) Delete the first parent, then commit. This might require first updating any number of child rows in any number of tables. In a huge organization, I can imagine some deletes like this taking weeks to finish.
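To make the MSDN ordering concrete, here is a rough C# sketch; parentAdapter and childAdapter are assumed to be pre-configured data adapters for the two tables, and wrapping the three calls in a single transaction is left out:

// 1. Child table: delete records
childAdapter.Update(childTable.Select("", "", DataViewRowState.Deleted));
// 2. Parent table: insert, update, and delete records
parentAdapter.Update(parentTable.Select("", "",
    DataViewRowState.Added | DataViewRowState.ModifiedCurrent | DataViewRowState.Deleted));
// 3. Child table: insert and update records
childAdapter.Update(childTable.Select("", "",
    DataViewRowState.Added | DataViewRowState.ModifiedCurrent));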
DataSet
9,801,930
16
I want to know the difference between make_initializable_iterator and make_one_shot_iterator. 1. Tensorflow documentations said that A "one-shot" iterator does not currently support re-initialization. What exactly does that mean? 2. Are the following 2 snippets equivalent? Use make_initializable_iterator iterator = data_ds.make_initializable_iterator() data_iter = iterator.get_next() sess = tf.Session() sess.run(tf.global_variables_initializer()) for e in range(1, epoch+1): sess.run(iterator.initializer) while True: try: x_train, y_train = sess.run([data_iter]) _, cost = sess.run([train_op, loss_op], feed_dict={X: x_train, Y: y_train}) except tf.errors.OutOfRangeError: break sess.close() Use make_one_shot_iterator iterator = data_ds.make_one_shot_iterator() data_iter = iterator.get_next() sess = tf.Session() sess.run(tf.global_variables_initializer()) for e in range(1, epoch+1): while True: try: x_train, y_train = sess.run([data_iter]) _, cost = sess.run([train_op, loss_op], feed_dict={X: x_train, Y: y_train}) except tf.errors.OutOfRangeError: break sess.close()
Suppose you want to use the same code to do your training and validation. You might like to use the same iterator, but initialized to point to different datasets; something like the following:

def _make_batch_iterator(filenames):
    dataset = tf.data.TFRecordDataset(filenames)
    ...
    return dataset.make_initializable_iterator()

filenames = tf.placeholder(tf.string, shape=[None])
iterator = _make_batch_iterator(filenames)

with tf.Session() as sess:
    for epoch in range(num_epochs):
        # Initialize iterator with training data
        sess.run(iterator.initializer,
                 feed_dict={filenames: ['training.tfrecord']})
        _train_model(...)

        # Re-initialize iterator with validation data
        sess.run(iterator.initializer,
                 feed_dict={filenames: ['validation.tfrecord']})
        _validate_model(...)

With a one-shot iterator, you can't re-initialize it like this.
DataSet
48,091,693
15
Dataset<Row> dataFrame = ... ; StringIndexerModel labelIndexer = new StringIndexer() .setInputCol("label") .setOutputCol("indexedLabel") .fit(dataFrame); VectorIndexerModel featureIndexer = new VectorIndexer() .setInputCol("s") .setOutputCol("indexedFeatures") .setMaxCategories(4) .fit(dataFrame); IndexToString labelConverter = new IndexToString() .setInputCol("prediction") .setOutputCol("predictedLabel") .setLabels(labelIndexer.labels()); What is StringIndexer, VectorIndexer, IndexToString and what is the difference between them? How and When should I use them?
String Indexer - Use it if you want the Machine Learning algorithm to identify column as categorical variable or if want to convert the textual data to numeric data keeping the categorical context. e,g converting days(Monday, Tuesday...) to numeric representation. Vector Indexer- use this if we do not know the types of data incoming. so we leave the logic of differentiating between categorical and non categorical data to the algorithm using Vector Indexer. e,g - Data coming from 3rd Party API, where data is hidden and is ingested directly to the training model. Indexer to string- just opposite of String indexer, use this if the final output column was indexed using String Indexer and now we want to convert back its numeric representation to textual so that result can be understood better.
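A rough sketch of where these usually sit in practice, continuing the question's code: the indexers run before the model while IndexToString runs after it. The classifier stage is an assumption here (any org.apache.spark.ml estimator would do):

// org.apache.spark.ml.{Pipeline, PipelineStage, PipelineModel}
Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] {
    labelIndexer, featureIndexer, classifier, labelConverter});
PipelineModel model = pipeline.fit(dataFrame);
Dataset<Row> predictions = model.transform(dataFrame);  // predictedLabel holds the original strings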
DataSet
44,195,535
15
I'm adding a DataTable to a DataSet like this:

DataTable dtImage = new DataTable();
//some updates in the DataTable
ds.Tables.Add(dtImage);

But the next time the DataTable gets updated, will that be reflected in the DataSet, or do we need to write some code to make it reflected? Also, I'm checking whether the DataTable already exists in the DataSet using:

if(!ds.Tables.Contains("dtImage"))
    ds.Tables.Add(dtImage);

In the first iteration, ds.Tables.Contains("dtImage") is false, so ds.Tables.Add(dtImage) adds the table to the DataSet. But in the second iteration, ds.Tables.Contains("dtImage") is false again, yet ds.Tables.Add(dtImage) throws an error: "DataTable already belongs to this DataSet." If the DataSet doesn't contain a DataTable named "dtImage", why is it throwing an error?

Update: Thanks, that issue got solved. Please answer this: when the DataTable gets updated later, will that be reflected in the DataSet, or do we need to write some code to make it reflected?
I assume that you haven't set the TableName property of the DataTable, for example via constructor: var tbl = new DataTable("dtImage"); If you don't provide a name, it will be automatically created with "Table1", the next table will get "Table2" and so on. Then the solution would be to provide the TableName and then check with Contains(nameOfTable). To clarify it: You'll get an ArgumentException if that DataTable already belongs to the DataSet (the same reference). You'll get a DuplicateNameException if there's already a DataTable in the DataSet with the same name(not case-sensitive). http://msdn.microsoft.com/en-us/library/as4zy2kc.aspx
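A minimal sketch of the fix. Note also that Tables.Add stores a reference to the very same DataTable rather than a copy, so any later changes you make to dtImage are visible through the DataSet without extra code:

DataTable dtImage = new DataTable("dtImage");   // name it so Contains("dtImage") can find it
// ...fill or update dtImage...
if (!ds.Tables.Contains("dtImage"))
    ds.Tables.Add(dtImage);                     // the DataSet now holds this same instance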
DataSet
12,178,823
15
I am looking for a Twitter or other social networking site dataset for my project. I currently have the CAW 2.0 Twitter dataset but it only contains users' tweets. I want data that shows the number of friends, followers and such. It does not have to be Twitter but I would prefer Twitter or Facebook. I already tried Infochimps but apparently the file is not downloadable anymore for Twitter. Can someone suggest good websites for finding this kind of dataset? I am going to feed the dataset to Hadoop.
Try the following three datasets:

Contains around 97 million tweets: http://demeter.inf.ed.ac.uk/index.php?option=com_content&view=article&id=2:test-post-for-twitter&catid=1:twitter&Itemid=2 (ed note: the dataset previously linked here is no longer available because of a request from Twitter to remove it.)
Contains a user graph of 47 million users: http://an.kaist.ac.kr/traces/WWW2010.html
The following dataset contains the network as well as tweets; however, the data was collected by snowball sampling, hence the friends network is not uniform. It has around 10 million tweets and you can mail the researcher for even more data: http://www.public.asu.edu/~mdechoud/datasets.html

Do have a look at the license the data is distributed under, though. Hope this helps. Also, can you tell me what kind of work you are planning with this dataset? I have a few Hadoop / Pig scripts to use with such datasets.
DataSet
3,340,810
15
Does anyone know if there is a DataSet class in Java like there is in .Net? I am familiar with EJB3 and the "java way" of doing data. However, I really still miss the seamless integration between database queries, xml and objects provided by the DataSet class. Has anyone found a Java implementation of DataSet (including DataTable, DataRow, etc)? Edit: Also if anyone has tutorials for the java flavor of DataSet, please share a link.
Have you looked at javax.sql.rowset.WebRowSet? From the Javadocs: The WebRowSetImpl provides the standard reference implementation, which may be extended if required. The standard WebRowSet XML Schema definition is available at the following URI: http://java.sun.com/xml/ns/jdbc/webrowset.xsd It describes the standard XML document format required when describing a RowSet object in XML and must be used be all standard implementations of the WebRowSet interface to ensure interoperability. In addition, the WebRowSet schema uses specific SQL/XML Schema annotations, thus ensuring greater cross platform inter-operability. This is an effort currently under way at the ISO organization. The SQL/XML definition is available at the following URI: http://standards.iso.org/iso/9075/2002/12/sqlxml
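A small self-contained sketch of what WebRowSet gives you (query the database, then dump rows plus metadata as XML); the JDBC URL and the customers table are placeholders, so adjust them to whatever database and driver you actually have:

import java.sql.Connection;
import java.sql.DriverManager;
import javax.sql.rowset.RowSetProvider;
import javax.sql.rowset.WebRowSet;

public class WebRowSetDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");   // placeholder URL
             WebRowSet wrs = RowSetProvider.newFactory().createWebRowSet()) {
            wrs.setCommand("SELECT id, name FROM customers");   // placeholder query
            wrs.execute(conn);                                   // run the query via the connection
            wrs.writeXml(System.out);                            // rows + metadata in the standard WebRowSet XML format
        }
    }
}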
DataSet
1,194,971
15
Newbie here, trying to learn more about the micrometer. I'm currently exploring ways on how to accomplish this: I'm using Spring boot 2 with actuator and micrometer enabled. Consider the following class: @Component class MyService { @Autowired AuthorizeTransaction callbackTransaction; @Autowired AuthorizeTransaction immediateTransaction; private MeterRegistry meterRegistry; private Counter requestCounter; private Counter responseCounter; public MyService(MeterRegistry meterRegistry) { this.meterRegistry = meterRegistry; initCounters(); } private initCounters() { requestCounter = Counter.builder("iso_request") .tags("mti", "0100") // how do I change the value of this tag for other request types like 0200, 0120, etc., .register(meterRegistry); responseCounter = Counter.builder("iso_response") .tags("mti", "0101") .tags("response_code", "00") // how do I change the value of this tag for other response codes like 01, 09, etc., .register(meterRegistry); } public ISOMsg process(ISOMsg request) { ISOMsg response = null; try { switch(request.getMTI()) { // org.jboss.iso.ISOMsg case "0100": case "0200": if ("0100".equals(request.getMTI())) { requestCounter.increment(); } else { requestCounter.increment(); // I want to increment the counter of the same metric with tag mti=0200 } response = immediateTransaction.process(request); // here I want to increment the response counter but with different MTIs and response codes break; case "0120": case "0121" response = callbackTransaction.process(request); break; default: log.error("error here") } } catch (Exception e) { log.error("error here") } return response; } } I'm stuck here and have to create different counter variables for each combination of tag values and the readability of the code gets affected really bad. I've many switch case statements than the above example. There should be definitely an easy way to do this, however I'm unable to find.
To have "dynamic" tag values, simply skip the instantiation of the counters in the initCounters() method. Everytime the counter shall be increased, instantiate a counter by using its builder method and increment, for example: Counter.builder("iso_response") .tags("mti", request.getMTI()) .tags("response_code", myReponseCode) .register(meterRegistry) .increment(); In fact, as the io.micrometer.core.instrument.Counter.Builder.register method states in its JavaDoc, a new counter is returned only if a counter with the same tag values does not yet exist. This is because each registry is guaranteed to only create one counter for the same combination of name and tags.
Micrometer
59,592,118
23
Is there a way to turn off some of the returned metric values in Actuator/Micrometer? Looking at them now I'm seeing around 1000 and would like to whittle them down to a select few say 100 to actually be sent to our registry.
Let me elaborate on the answer posted by checketts with a few examples. You can enable/disable certain metrics in your application.yml like this (Spring Boot docs):

management:
  metrics:
    enable:
      tomcat: true
      jvm: false
      process: false
      hikaricp: false
      system: false
      jdbc: false
      http: false
      logback: true

Or in code by defining a MeterFilter bean:

@Bean
public MeterFilter meterFilter() {
    return new MeterFilter() {
        @Override
        public MeterFilterReply accept(Meter.Id id) {
            if (id.getName().startsWith("tomcat.")) {
                return MeterFilterReply.DENY;
            }
            if (id.getName().startsWith("jvm.")) {
                return MeterFilterReply.DENY;
            }
            if (id.getName().startsWith("process.")) {
                return MeterFilterReply.DENY;
            }
            if (id.getName().startsWith("system.")) {
                return MeterFilterReply.DENY;
            }
            return MeterFilterReply.NEUTRAL;
        }
    };
}
Micrometer
48,451,381
20
I am currently trying to migrate our Prometheus lib to Spring Boot 2.0.3.RELEASE. We use a custom path for Prometheus and so far we use a workaround to ensure this. There is the possibility of a custom path for the info and health endpoints, using management.endpoint.<health/info>.path. I tried to specify management.endpoint.prometheus.path, but it was still only accessible under /actuator/prometheus. How can I use a custom path for Prometheus? We enable Prometheus using the following libs (snippet of our build.gradle):

compile "org.springframework.boot:spring-boot-starter-actuator:2.0.3.RELEASE"
compile "io.micrometer:micrometer-core:2.0.5"
compile "io.micrometer:micrometer-registry-prometheus:2.0.5"

We also import the class PrometheusMetricsExportAutoConfiguration. Your help is highly appreciated :)
From the reference documentation: By default, endpoints are exposed over HTTP under the /actuator path by using the ID of the endpoint. For example, the beans endpoint is exposed under /actuator/beans. If you want to map endpoints to a different path, you can use the management.endpoints.web.path-mapping property. Also, if you want change the base path, you can use management.endpoints.web.base-path. The following example remaps /actuator/health to /healthcheck: application.properties: management.endpoints.web.base-path=/ management.endpoints.web.path-mapping.health=healthcheck So, to remap the prometheus endpoint to a different path beneath /actuator you can use the following property: management.endpoints.web.path-mapping.prometheus=whatever-you-want The above will make the Prometheus endpoint available at /actuator/whatever-you-want If you want the Prometheus endpoint to be available at the root, you'll have to move all the endpoints there and remap it: management.endpoints.web.base-path=/ management.endpoints.web.path-mapping.prometheus=whatever-you-want The above will make the Prometheus endpoint available at /whatever-you-want but with the side-effect of also moving any other enabled endpoints up to / rather than being beneath /actuator.
Micrometer
51,195,237
16
I am using the new MicroMeter metrics in Spring Boot 2 version 2.0.0-RELEASE. When publishing metrics over the /actuator/metrics/{metric.name} endpoint, i get the following: For a DistributionSummary : "name": "sources.ingestion.rate", "measurements": [ { "statistic": "COUNT", "value": 5 }, { "statistic": "TOTAL", "value": 72169.44162067816 }, { "statistic": "MAX", "value": 17870.68010661754 } ], "availableTags": [] } For a Timer : { "name": "sources.ingestion", "measurements": [ { "statistic": "COUNT", "value": 5 }, { "statistic": "TOTAL_TIME", "value": 65.700878648 }, { "statistic": "MAX", "value": 22.661545322 } ], "availableTags": [] } Is it possible to enrich the measurements to add measures like mean, min, or percentiles ? For percentiles i tried using .publishPercentiles(0.5, 0.95), but that doesn't reflect on the actuator endpoint.
After discussions on Github, this is not currently implemented in Micrometer. More details directly on the Github Issues: https://github.com/micrometer-metrics/micrometer/issues/488#issuecomment-373249656 https://github.com/spring-projects/spring-boot/issues/12433 https://github.com/micrometer-metrics/micrometer/issues/457
Micrometer
49,166,271
16
I am new to using spring-boot metrics and started with micrometer. I couldn't find good examples(the fact that its new) for performing timer Metrics in my spring-boot app. I am using spring-boot-starter-web:2.0.2.RELEASE dependency . But running spring-boot server and starting jconsole, I didn't see it showing Metrics (MBeans),so I also explicitly included below dependency: spring-boot-starter-actuator:2.0.2.RELEASE Also micrometer dependency : 'io.micrometer:micrometer-registry-jmx:latest' After adding actuator ,it does show Metrics folder but I do not see my timer(app.timer)attribute in the list. Am I doing something wrong? Any suggestions appreciated! Below code snippet: MeterRegistry registry = new CompositeMeterRegistry(); long start = System.currentTimeMillis(); Timer timer = registry.timer("app.timer", "type", "ping"); timer.record(System.currentTimeMillis() - start, TimeUnit.MILLISECONDS); This works: Metrics.timer("app.timer").record(()-> { didSomeLogic; long t = timeOccurred - timeScheduled; LOG.info("recorded timer = {}", t); });
This could be. If you are using Spring Boot 2, just call Timer wherever you want, no need to use a constructor.

public void methodName() {
    // start
    Stopwatch stopwatch = Stopwatch.createStarted(); // Google Guava

    // your job here

    // check time
    Metrics.timer("metric-name",
            "tag1", this.getClass().getSimpleName(),                     // class
            "tag2", new Exception().getStackTrace()[0].getMethodName())  // method
        .record(stopwatch.stop().elapsed());
}
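If you'd rather not pull in Guava, the same idea can be sketched with Micrometer's own Timer.Sample; the metric and tag names below are made up, and io.micrometer.core.instrument.Metrics/Timer are assumed to be imported:

public void methodName() {
    Timer.Sample sample = Timer.start(Metrics.globalRegistry);  // start the clock

    // your job here

    sample.stop(Metrics.timer("metric-name",
            "tag1", this.getClass().getSimpleName(),
            "tag2", "methodName"));                             // records the elapsed time
}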
Micrometer
51,952,855
15
I am trying to generate Prometheus metrics with using Micrometer.io with Spring Boot 2.0.0.RELEASE. When I am trying to expose the size of a List as Gauge, it keeps displaying NaN. In the documentation it says that; It is your responsibility to hold a strong reference to the state object that you are measuring with a Gauge. I have tried some different ways but I could not solve the problem. Here is my code with some trials. import io.micrometer.core.instrument.*; import io.swagger.backend.model.Product; import io.swagger.backend.service.ProductService; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.*; import java.util.List; import java.util.concurrent.atomic.AtomicInteger; @RestController @RequestMapping("metrics") public class ExampleController { private AtomicInteger atomicInteger = new AtomicInteger(); private ProductService productService; private final Gauge productGauge; @Autowired public HelloController(ProductService productService, MeterRegistry registry) { this.productService = productService; createGauge("product_gauge", productService.getProducts(), registry); } private void createGauge(String metricName, List<Product> products, MeterRegistry registry) { List<Product> products = productService.getProducts(); // #1 // this displays product_gauge as NaN AtomicInteger n = registry.gauge("product_gauge", new AtomicInteger(0)); n.set(1); n.set(2); // #2 // this also displays product_gauge as NaN Gauge .builder("product_gauge", products, List::size) .register(registry); // #3 // this displays also NaN testListReference = Arrays.asList(1, 2); Gauge .builder("random_gauge", testListReference, List::size) .register(registry); // #4 // this also displays NaN AtomicInteger currentHttpRequests = registry.gauge("current.http.requests", new AtomicInteger(0)); } @GetMapping(path = "/product/decrement") public Counter decrementAndGetProductCounter() { // decrement the gague by one } } Is there anyone who can help with this issue? Any help would be appreciated.
In all cases, you must hold a strong reference to the observed instance. When your createGauge() method is exited, all function stack allocated references are eligible for garbage collection. For #1, pass your atomicInteger field like this: registry.gauge("my_ai", atomicInteger);. Then increment/decrement as you wish. Whenever micrometer needs to query it, it will as long as it finds the reference. For #2, pass your productService field and a lambda. Basically whenever the gauge is queried, it will call that lambda with the provided object: registry.gauge("product_gauge", productService, productService -> productService.getProducts().size()); (No guarantee regarding syntax errors.)
Micrometer
50,821,924
15
I'd like to use Micrometer to record the execution time of an async method when it eventually happens. Is there a recommended way to do this? Example: Kafka Replying Template. I want to record the time it takes to actually execute the sendAndReceive call (sends a message on a request topic and receives a response on a reply topic). public Mono<String> sendRequest(Mono<String> request) { return request .map(r -> new ProducerRecord<String, String>(requestsTopic, r)) .map(pr -> { pr.headers() .add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "reply-topic".getBytes())); return pr; }) .map(pr -> replyingKafkaTemplate.sendAndReceive(pr)) ... // further maps, filters, etc. Something like responseGenerationTimer.record(() -> replyingKafkaTemplate.sendAndReceive(pr))) won't work here; it just records the time that it takes to create the Supplier, not the actual execution time.
You can just metrics() from Mono/Flux() (have a look at metrics() here: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html) then you can do something like public Mono<String> sendRequest(Mono<String> request) { return request .map(r -> new ProducerRecord<String, String>(requestsTopic, r)) .map(pr -> { pr.headers() .add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "reply-topic".getBytes())); return pr; }) .map(pr -> replyingKafkaTemplate.sendAndReceive(pr)).name("my-metricsname").metrics() And e.g. in graphite you will see latency for this call measured (You can see more here: How to use Micrometer timer together with webflux endpoints)
Micrometer
49,311,495
14
I have a Spring Boot app throwing out open metric stats using Micrometer. For each of my HTTP endpoints, I can see the following metric, which I believe tracks the number of requests for the given endpoint:

http_server_requests_seconds_count

My question is: how do I use this in a Grafana query to present the number of requests calling my endpoint, say, every minute? I tried http_client_requests_seconds_count{} and sum(rate(http_client_requests_seconds_count{}[1m])) but neither works. Thanks in advance.
rate(http_client_requests_seconds_count{}[1m]) will provide you the number of request your service received at a per-second rate. However by using [1m] it will only look at the last minute to calculate that number, and requires that you collect samples at a rate quicker than a minute. Meaning, you need to have collected 2 scrapes in that timeframe. increase(http_client_requests_seconds_count{}[1m]) would return how much that count increased in that timeframe, which is probably what you would want, though you still need to have 2 data points in that window to get a result. Other way you could accomplish your result: increase(http_client_requests_seconds_count{}[2m]) / 2 By looking over 2 minutes then dividing it, you will have more data and it will flatten spikes, so you'll get a smoother chart. rate(http_client_requests_seconds_count{}[1m]) * 60 By multiplying the rate by 60 you can change the per-second rate to a per-minute value. Here is a writeup you can dig into to learn more about how they are calculated and why increases might not exactly align with integer values: https://promlabs.com/blog/2021/01/29/how-exactly-does-promql-calculate-rates
Micrometer
66,282,512
13
After migrating spring boot from version 2.x to 3 we miss traceId and spanId in our logs. We removed all sleuth dependencies and added implementation 'io.micrometer:micrometer-core' implementation 'io.micrometer:micrometer-tracing' implementation 'io.micrometer:micrometer-tracing-bridge-brave' implementation platform('io.micrometer:micrometer-tracing-bom:latest.release') as well as logging.pattern.level: "%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]" but no traceIds and spanIds are being logged. Is there something we missed?
You need actuator and a bridge, the rest you included is not needed: implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'io.micrometer:micrometer-tracing-bridge-brave' If you also want to report your spans, you should add the zipkin reporter too: implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'io.micrometer:micrometer-tracing-bridge-brave' implementation 'io.zipkin.reporter2:zipkin-reporter-brave' Here's an example on start.spring.io and there are a lot of samples in the micrometer-samples repo.
Micrometer
75,170,489
13