Dataset schema: question (string, 11–28.2k chars) · answer (string, 26–27.7k chars) · tag (string, 130 classes) · question_id (int64, 935–78.4M) · score (int64, 10–5.49k)
I am looking for a way to get all of the letters in a string before a ':', but I have no idea where to start. Would I use regex? If so, how? string = "Username: How are you today?" Can someone show me an example of what I could do?
Just use the split function. It returns a list, so you can keep the first element: >>> s1 = "Username: How are you today?" >>> s1.split(':') ['Username', ' How are you today?'] >>> s1.split(':')[0] 'Username'
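Another option (a sketch, not part of the original answer) is str.partition, which only ever splits on the first occurrence:

```python
s1 = "Username: How are you today?"
# str.partition splits on the first occurrence only and always
# returns a 3-tuple: (before, separator, after)
before, sep, after = s1.partition(':')
print(before)  # Username
```

Unlike split, partition never produces a variable-length list, so unpacking is always safe even when the separator is absent.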
Split
27,387,415
212
I want to split a string by the '=' character, but I want it to split on the first instance only. How can I do that? Here is a JavaScript example for the '_' character, but it doesn't work for me: split string only on first instance of specified character. Example: apple=fruit table price=5 When I try String.split("="); it gives [apple], [fruit table price], [5], but I need [apple], [fruit table price=5]. Thanks
string.split("=", limit=2); As String.split(java.lang.String regex, int limit) explains: The array returned by this method contains each substring of this string that is terminated by another substring that matches the given expression or is terminated by the end of the string. The substrings in the array are in the order in which they occur in this string. If the expression does not match any part of the input then the resulting array has just one element, namely this string. The limit parameter controls the number of times the pattern is applied and therefore affects the length of the resulting array. If the limit n is greater than zero then the pattern will be applied at most n - 1 times, the array's length will be no greater than n, and the array's last entry will contain all input beyond the last matched delimiter. The string boo:and:foo, for example, yields the following results with these parameters: Regex Limit Result : 2 { "boo", "and:foo" } : 5 { "boo", "and", "foo" } : -2 { "boo", "and", "foo" } o 5 { "b", "", ":and:f", "", "" } o -2 { "b", "", ":and:f", "", "" } o 0 { "b", "", ":and:f" }
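For comparison, Python's str.split exposes the same idea through its maxsplit argument (an illustration only; the question and answer above are about Java):

```python
line = "apple=fruit table price=5"
# maxsplit=1 applies the delimiter at most once, so everything
# after the first '=' stays intact
key, value = line.split("=", 1)
print(key)    # apple
print(value)  # fruit table price=5
```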
Split
18,462,826
204
How do I write a function to split and return an array for a string with delimiters in the C programming language? char* str = "JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,DEC"; str_split(str,',');
You can use the strtok() function to split a string (and specify the delimiter to use). Note that strtok() will modify the string passed into it. If the original string is required elsewhere make a copy of it and pass the copy to strtok(). EDIT: Example (note it does not handle consecutive delimiters, "JAN,,,FEB,MAR" for example): #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> char** str_split(char* a_str, const char a_delim) { char** result = 0; size_t count = 0; char* tmp = a_str; char* last_comma = 0; char delim[2]; delim[0] = a_delim; delim[1] = 0; /* Count how many elements will be extracted. */ while (*tmp) { if (a_delim == *tmp) { count++; last_comma = tmp; } tmp++; } /* Add space for trailing token. */ count += last_comma < (a_str + strlen(a_str) - 1); /* Add space for terminating null string so caller knows where the list of returned strings ends. */ count++; result = malloc(sizeof(char*) * count); if (result) { size_t idx = 0; char* token = strtok(a_str, delim); while (token) { assert(idx < count); *(result + idx++) = strdup(token); token = strtok(0, delim); } assert(idx == count - 1); *(result + idx) = 0; } return result; } int main() { char months[] = "JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,DEC"; char** tokens; printf("months=[%s]\n\n", months); tokens = str_split(months, ','); if (tokens) { int i; for (i = 0; *(tokens + i); i++) { printf("month=[%s]\n", *(tokens + i)); free(*(tokens + i)); } printf("\n"); free(tokens); } return 0; } Output: $ ./main.exe months=[JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,DEC] month=[JAN] month=[FEB] month=[MAR] month=[APR] month=[MAY] month=[JUN] month=[JUL] month=[AUG] month=[SEP] month=[OCT] month=[NOV] month=[DEC]
Split
9,210,528
201
I am using split('\n') to get lines in one string, and found that ''.split() returns an empty list, [], while ''.split('\n') returns ['']. Is there any specific reason for such a difference? And is there any more convenient way to count lines in a string?
Question: I am using split('\n') to get lines in one string, and found that ''.split() returns an empty list, [], while ''.split('\n') returns ['']. The str.split() method has two algorithms. If no arguments are given, it splits on repeated runs of whitespace. However, if an argument is given, it is treated as a single delimiter with no repeated runs. In the case of splitting an empty string, the first mode (no argument) will return an empty list because the whitespace is eaten and there are no values to put in the result list. In contrast, the second mode (with an argument such as \n) will produce the first empty field. Consider if you had written '\n'.split('\n'), you would get two fields (one split, gives you two halves). Question: Is there any specific reason for such a difference? This first mode is useful when data is aligned in columns with variable amounts of whitespace. For example: >>> data = '''\ Shasta California 14,200 McKinley Alaska 20,300 Fuji Japan 12,400 ''' >>> for line in data.splitlines(): print(line.split()) ['Shasta', 'California', '14,200'] ['McKinley', 'Alaska', '20,300'] ['Fuji', 'Japan', '12,400'] The second mode is useful for delimited data such as CSV where repeated commas denote empty fields. For example: >>> data = '''\ Guido,BDFL,,Amsterdam Barry,FLUFL,,USA Tim,,,USA ''' >>> for line in data.splitlines(): print(line.split(',')) ['Guido', 'BDFL', '', 'Amsterdam'] ['Barry', 'FLUFL', '', 'USA'] ['Tim', '', '', 'USA'] Note, the number of result fields is one greater than the number of delimiters. Think of cutting a rope. If you make no cuts, you have one piece. Making one cut, gives two pieces. Making two cuts, gives three pieces. And so it is with Python's str.split(delimiter) method: >>> ''.split(',') # No cuts [''] >>> ','.split(',') # One cut ['', ''] >>> ',,'.split(',') # Two cuts ['', '', ''] Question: And is there any more convenient way to count lines in a string? Yes, there are a couple of easy ways. 
One uses str.count() and the other uses str.splitlines(). Both ways will give the same answer unless the final line is missing the \n. If the final newline is missing, the str.splitlines approach will give the accurate answer. A faster technique that is also accurate uses the count method but then corrects it for the final newline: >>> data = '''\ Line 1 Line 2 Line 3 Line 4''' >>> data.count('\n') # Inaccurate 3 >>> len(data.splitlines()) # Accurate, but slow 4 >>> data.count('\n') + (not data.endswith('\n')) # Accurate and fast 4 Question from @Kaz: Why the heck are two very different algorithms shoe-horned into a single function? The signature for str.split is about 20 years old, and a number of the APIs from that era are strictly pragmatic. While not perfect, the method signature isn't "terrible" either. For the most part, Guido's API design choices have stood the test of time. The current API is not without advantages. Consider strings such as: ps_aux_header = 'USER PID %CPU %MEM VSZ' patient_header = 'name,age,height,weight' When asked to break these strings into fields, people tend to describe both using the same English word, "split". When asked to read code such as fields = line.split() or fields = line.split(','), people tend to correctly interpret the statements as "splits a line into fields". Microsoft Excel's text-to-columns tool made a similar API choice and incorporates both splitting algorithms in the same tool. People seem to mentally model field-splitting as a single concept even though more than one algorithm is involved.
Split
16,645,083
197
I am having trouble splitting a string in C# with a delimiter of "][". For example, the string "abc][rfd][5][,][." should yield an array containing: abc rfd 5 , . But I cannot seem to get it to work; even if I try RegEx I cannot get a split on the delimiter. EDIT: Essentially I wanted to resolve this issue without the need for a regular expression. The solution that I accept is: string delimiter = "]["; var result = stringToSplit.Split(new[] { delimiter }, StringSplitOptions.None); I am glad to be able to resolve this split question.
To show both string.Split and Regex usage: string input = "abc][rfd][5][,][."; string[] parts1 = input.Split(new string[] { "][" }, StringSplitOptions.None); string[] parts2 = Regex.Split(input, @"\]\[");
Split
1,254,577
197
I have a SQL Server 2008 R2 column containing a string which I need to split by a comma. I have seen many answers on StackOverflow but none of them works in R2. I have made sure I have select permissions on any split function examples. Any help greatly appreciated.
I've used this SQL before which may work for you:- CREATE FUNCTION dbo.splitstring ( @stringToSplit VARCHAR(MAX) ) RETURNS @returnList TABLE ([Name] [nvarchar] (500)) AS BEGIN DECLARE @name NVARCHAR(255) DECLARE @pos INT WHILE CHARINDEX(',', @stringToSplit) > 0 BEGIN SELECT @pos = CHARINDEX(',', @stringToSplit) SELECT @name = SUBSTRING(@stringToSplit, 1, @pos-1) INSERT INTO @returnList SELECT @name SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos+1, LEN(@stringToSplit)-@pos) END INSERT INTO @returnList SELECT @stringToSplit RETURN END and to use it:- SELECT * FROM dbo.splitstring('91,12,65,78,56,789')
Split
10,914,576
191
I have emacs split horizontally - on top I'm editing Perl code, the bottom is the shell. By default emacs makes the two windows equal in size, but I'd like the shell buffer smaller (maybe half the size?). I was wondering how I could do that.
With the mouse, you can drag the window sizes around. Click anywhere on the mode line that is not otherwise 'active' (the buffer name is safe, or any unused area to the right hand side), and you can drag up or down. Side-to-side dragging requires a very precise click on the spot where the two mode lines join. C-x - (shrink-window-if-larger-than-buffer) will shrink a window to fit its content. C-x + (balance-windows) will make windows the same heights and widths. C-x ^ (enlarge-window) increases the height by 1 line, or the prefix arg value. A negative arg shrinks the window. e.g. C-- C-1 C-6 C-x ^ shrinks by 16 rows, as does C-u - 1 6 C-x ^. (There is no default binding for shrink-window.) C-x } (enlarge-window-horizontally) does likewise, horizontally. C-x { (shrink-window-horizontally) is also bound by default. Following one of these commands with repeat (C-x z to initiate, and just z for continued repetition) makes it pretty easy to get to the exact size you want. If you regularly want to do this with a specific value, you could record a keyboard macro to do it, or use something like (global-set-key (kbd "C-c v") (kbd "C-u - 1 6 C-x ^")) Or this: (global-set-key (kbd "C-c v") (kbd "C-x o C-x 2 C-x 0 C-u - 1 C-x o")) Which is a smidgen hacky, so this would be better: (defun halve-other-window-height () "Expand current window to use half of the other window's lines." (interactive) (enlarge-window (/ (window-height (next-window)) 2))) (global-set-key (kbd "C-c v") 'halve-other-window-height) Tangentially, I also love winner-mode which lets you repeatedly 'undo' any changes to window configurations with C-c left (whether the change is the size/number/arrangement of the windows, or just which buffer is displayed). C-c right returns you to the most recent configuration. Set it globally with (winner-mode 1)
Split
4,987,760
190
I have a text file. I need to get a list of sentences. How can this be implemented? There are a lot of subtleties, such as a dot being used in abbreviations. My old regular expression works badly: re.compile('(\. |^|!|\?)([A-Z][^;↑\.<>@\^&/\[\]]*(\.|!|\?) )',re.M)
The Natural Language Toolkit (nltk.org) has what you need. This group posting indicates that this does it: import nltk.data tokenizer = nltk.data.load('tokenizers/punkt/english.pickle') fp = open("test.txt") data = fp.read() print('\n-----\n'.join(tokenizer.tokenize(data))) (I haven't tried it!)
Split
4,576,077
187
I have two tmux windows, with a single pane in each, and I would like to join these two panes together into a single window as a horizontal split panes. How could I do that?
Actually I found the way to do that. Suppose the two windows are number 1 and 2. Use join-pane -s 2 -t 1 This will move the 2nd window as a pane to the 1st window. The opposite command is break-pane
Split
9,592,969
183
I am using the String split method and I want to have the last element. The size of the Array can change. Example: String one = "Düsseldorf - Zentrum - Günnewig Uebachs" String two = "Düsseldorf - Madison" I want to split the above Strings and get the last item: lastone = one.split("-")[here the last item] // <- how? lasttwo = two.split("-")[here the last item] // <- how? I don't know the sizes of the arrays at runtime :(
You could use lastIndexOf() method on String String last = string.substring(string.lastIndexOf('-') + 1);
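The same lastIndexOf idea translates directly to Python (a sketch for comparison; the question above is about Java) using rsplit, which splits from the right:

```python
one = "Düsseldorf - Zentrum - Günnewig Uebachs"
two = "Düsseldorf - Madison"
# rsplit with maxsplit=1 splits once from the right, so the last
# element is everything after the final delimiter
last_one = one.rsplit("-", 1)[-1].strip()
last_two = two.rsplit("-", 1)[-1].strip()
print(last_one)  # Günnewig Uebachs
print(last_two)  # Madison
```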
Split
1,181,969
178
I have the output of a command in tabular form. I'm parsing this output from a result file and storing it in a string. Each element in one row is separated by one or more whitespace characters, thus I'm using regular expressions to match 1 or more spaces and split it. However, a space is being inserted between every element: >>> str1 = "a b c d" # spaces are irregular >>> str1 'a b c d' >>> str2 = re.split("( )+", str1) >>> str2 ['a', ' ', 'b', ' ', 'c', ' ', 'd'] # 1 space element between! Is there a better way to do this? After each split str2 is appended to a list.
By using ( ), you are capturing the group; if you simply remove the parentheses you will not have this problem. >>> str1 = "a b c d" >>> re.split(" +", str1) ['a', 'b', 'c', 'd'] However, there is no need for regex here: str.split without any delimiter specified will split on whitespace for you, and this would be the best way in this case. >>> str1.split() ['a', 'b', 'c', 'd'] If you really wanted regex you can use this (r'\s+' represents one or more whitespace characters, and it's clearer): >>> re.split(r'\s+', str1) ['a', 'b', 'c', 'd'] Or you can find all non-whitespace characters: >>> re.findall(r'\S+', str1) ['a', 'b', 'c', 'd']
Split
10,974,932
176
What is the point of '/segment/segment/'.split('/') returning ['', 'segment', 'segment', '']? Notice the empty elements. If you're splitting on a delimiter that happens to be at position one and at the very end of a string, what extra value does it give you to have the empty string returned from each end?
str.split complements str.join, so "/".join(['', 'segment', 'segment', '']) gets you back the original string. If the empty strings were not there, the first and last '/' would be missing after the join().
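The round trip described above can be checked directly; the empty strings at the ends are exactly what makes join a lossless inverse of split:

```python
parts = '/segment/segment/'.split('/')
print(parts)  # ['', 'segment', 'segment', '']
# joining the pieces back with the same delimiter reproduces the
# original string, including the leading and trailing '/'
print('/'.join(parts))  # /segment/segment/
```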
Split
2,197,451
175
I am trying to use train_test_split from package scikit Learn, but I am having trouble with parameter stratify. Hereafter is the code: from sklearn import cross_validation, datasets X = iris.data[:,:2] y = iris.target cross_validation.train_test_split(X,y,stratify=y) However, I keep getting the following problem: raise TypeError("Invalid parameters passed: %s" % str(options)) TypeError: Invalid parameters passed: {'stratify': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])} Does someone have an idea what is going on? Below is the function documentation. [...] stratify : array-like or None (default is None) If not None, data is split in a stratified fashion, using this as the labels array. New in version 0.17: stratify splitting [...]
The stratify parameter makes a split so that the proportion of values in the sample produced is the same as the proportion of values in the array passed to stratify. For example, in a binary classification problem, if y is the dependent variable (target/label column) and its values are 25% zeros and 75% ones, then stratify=y will make sure that your random split also has 25% zeros and 75% ones.
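A rough pure-Python sketch of what stratification does (a hypothetical helper for illustration, not scikit-learn's actual implementation): split each class's indices separately, so both halves keep the overall class proportions.

```python
import random
from collections import Counter

def stratified_split(y, test_frac=0.25, seed=0):
    """Split indices of y so each class keeps its overall proportion."""
    rng = random.Random(seed)
    by_class = {}
    for i, label in enumerate(y):
        by_class.setdefault(label, []).append(i)
    train, test = [], []
    # split each class independently, then pool the pieces
    for label, idxs in by_class.items():
        rng.shuffle(idxs)
        k = int(len(idxs) * test_frac)
        test.extend(idxs[:k])
        train.extend(idxs[k:])
    return train, test

y = [0] * 25 + [1] * 75          # 25% zeros, 75% ones
train, test = stratified_split(y)
print(Counter(y[i] for i in train))  # ~25% zeros, 75% ones
print(Counter(y[i] for i in test))   # same proportions
```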
Split
34,842,405
172
I want to format this date: <div id="date">23/05/2013</div>. First I want to split the string at the first / and have the rest in the next line. Next, I’d like to surround the first part in a <span> tag, as follows: <div id="date"> <span>23</span> 05/2013</div> 23 05/2013 What I did: <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div id="date">23/05/2013</div> <script type="text/javascript"> $(document).ready(function() { $("#date").text().substring(0, 2) + '<br />'; }); </script> See the JSFiddle. But this does not work. Can someone help me with jQuery?
Using split(). Snippet: var data = $('#date').text(); var arr = data.split('/'); $("#date").html("<span>" + arr[0] + "</span><br/>" + arr[1] + "/" + arr[2]); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div id="date">23/05/2013</div> When you split the string "23/05/2013" on /: var myString = "23/05/2013"; var arr = myString.split('/'); you'll get an array of size 3: arr[0] --> 23 arr[1] --> 05 arr[2] --> 2013
Split
16,711,504
169
In Python it is possible to split a string and assign it to variables: ip, port = '127.0.0.1:5432'.split(':') but in Go it does not seem to work: ip, port := strings.Split("127.0.0.1:5432", ":") // assignment count mismatch: 2 = 1 Question: How to split a string and assign values in one step?
Two steps, for example, package main import ( "fmt" "strings" ) func main() { s := strings.Split("127.0.0.1:5432", ":") ip, port := s[0], s[1] fmt.Println(ip, port) } Output: 127.0.0.1 5432 One step, for example, package main import ( "fmt" "net" ) func main() { host, port, err := net.SplitHostPort("127.0.0.1:5432") fmt.Println(host, port, err) } Output: 127.0.0.1 5432 <nil>
Split
16,551,354
168
I want to split this line: string line = "First Name ; string ; firstName"; into an array of their trimmed versions: "First Name" "string" "firstName" How can I do this all on one line? The following gives me an error "cannot convert type void": List<string> parts = line.Split(';').ToList().ForEach(p => p.Trim());
Try List<string> parts = line.Split(';').Select(p => p.Trim()).ToList(); FYI, the ForEach method takes an Action<T> (which takes a T and returns void) as its parameter, while your lambda returns a string, since string.Trim returns a string. The ForEach extension method is meant to modify the state of objects within the collection; as strings are immutable, it would have no effect here.
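The same split-then-trim pipeline in Python (an aside for comparison; the question above is about C#) is a list comprehension, which plays the role of Select + ToList:

```python
line = "First Name ; string ; firstName"
# map strip over every piece and materialize the list,
# the equivalent of .Select(p => p.Trim()).ToList()
parts = [p.strip() for p in line.split(';')]
print(parts)  # ['First Name', 'string', 'firstName']
```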
Split
1,728,303
168
I just learned about Java's Scanner class and now I'm wondering how it compares/competes with StringTokenizer and String.split. I know that StringTokenizer and String.split only work on Strings, so why would I want to use Scanner for a String? Is Scanner just intended to be one-stop shopping for splitting?
They're essentially horses for courses. Scanner is designed for cases where you need to parse a string, pulling out data of different types. It's very flexible, but arguably doesn't give you the simplest API for simply getting an array of strings delimited by a particular expression. String.split() and Pattern.split() give you an easy syntax for doing the latter, but that's essentially all that they do. If you want to parse the resulting strings, or change the delimiter halfway through depending on a particular token, they won't help you with that. StringTokenizer is even more restrictive than String.split(), and also a bit fiddlier to use. It is essentially designed for pulling out tokens delimited by fixed substrings. Because of this restriction, it's about twice as fast as String.split(). (See my comparison of String.split() and StringTokenizer.) It also predates the regular expressions API, of which String.split() is a part. You'll note from my timings that String.split() can still tokenize thousands of strings in a few milliseconds on a typical machine. In addition, it has the advantage over StringTokenizer that it gives you the output as a string array, which is usually what you want. Using an Enumeration, as provided by StringTokenizer, is too "syntactically fussy" most of the time. From this point of view, StringTokenizer is a bit of a waste of space nowadays, and you may as well just use String.split().
Split
691,184
168
Why am I getting... Uncaught TypeError: string.split is not a function ...when I run... var string = document.location; var split = string.split('/');
Change this... var string = document.location; to this... var string = document.location + ''; This is because document.location is a Location object. The default .toString() returns the location in string form, so the concatenation will trigger that. You could also use document.URL to get a string.
Split
10,145,946
158
How do I split the string "Thequickbrownfoxjumps" into substrings of equal size in Java? E.g., "Thequickbrownfoxjumps" split into size 4 should give the output ["Theq","uick","brow","nfox","jump","s"]. Similar question: Split string into equal-length substrings in Scala.
Here's the regex one-liner version: System.out.println(Arrays.toString( "Thequickbrownfoxjumps".split("(?<=\\G.{4})") )); \G is a zero-width assertion that matches the position where the previous match ended. If there was no previous match, it matches the beginning of the input, the same as \A. The enclosing lookbehind matches the position that's four characters along from the end of the last match. Both lookbehind and \G are advanced regex features, not supported by all flavors. Furthermore, \G is not implemented consistently across the flavors that do support it. This trick will work (for example) in Java, Perl, .NET and JGSoft, but not in PHP (PCRE), Ruby 1.9+ or TextMate (both Oniguruma). JavaScript's /y (sticky flag) isn't as flexible as \G, and couldn't be used this way even if JS did support lookbehind. I should mention that I don't necessarily recommend this solution if you have other options. The non-regex solutions in the other answers may be longer, but they're also self-documenting; this one's just about the opposite of that. ;) Also, this doesn't work in Android, which doesn't support the use of \G in lookbehinds.
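For reference, the same fixed-width chunking is a one-liner in Python without any regex (an illustration; the question above is about Java):

```python
s = "Thequickbrownfoxjumps"
n = 4
# slicing past the end of a string is safe in Python, so the
# final short chunk ('s') falls out naturally
chunks = [s[i:i + n] for i in range(0, len(s), n)]
print(chunks)  # ['Theq', 'uick', 'brow', 'nfox', 'jump', 's']
```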
Split
3,760,152
157
I have a data frame, like so: data.frame(director = c("Aaron Blaise,Bob Walker", "Akira Kurosawa", "Alan J. Pakula", "Alan Parker", "Alejandro Amenabar", "Alejandro Gonzalez Inarritu", "Alejandro Gonzalez Inarritu,Benicio Del Toro", "Alejandro González Iñárritu", "Alex Proyas", "Alexander Hall", "Alfonso Cuaron", "Alfred Hitchcock", "Anatole Litvak", "Andrew Adamson,Marilyn Fox", "Andrew Dominik", "Andrew Stanton", "Andrew Stanton,Lee Unkrich", "Angelina Jolie,John Stevenson", "Anne Fontaine", "Anthony Harvey"), AB = c('A', 'B', 'A', 'A', 'B', 'B', 'B', 'A', 'B', 'A', 'B', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'A')) As you can see, some entries in the director column are multiple names separated by commas. I would like to split these entries up into separate rows while maintaining the values of the other column. As an example, the first row in the data frame above should be split into two rows, with a single name each in the director column and 'A' in the AB column.
Several alternatives: 1) two ways with data.table: library(data.table) # method 1 (preferred) setDT(v)[, lapply(.SD, function(x) unlist(tstrsplit(x, ",", fixed=TRUE))), by = AB ][!is.na(director)] # method 2 setDT(v)[, strsplit(as.character(director), ",", fixed=TRUE), by = .(AB, director) ][,.(director = V1, AB)] 2) a dplyr / tidyr combination: library(dplyr) library(tidyr) v %>% mutate(director = strsplit(as.character(director), ",")) %>% unnest(director) 3) with tidyr only: With tidyr 0.5.0 (and later), you can also just use separate_rows: separate_rows(v, director, sep = ",") You can use the convert = TRUE parameter to automatically convert numbers into numeric columns. With tidyr_1.3.0 (and later), you can use separate_longer_delim (and separate_rows is now superseded): separate_longer_delim(v, director, delim = ",") 4) with base R: # if 'director' is a character-column: stack(setNames(strsplit(df$director,','), df$AB)) # if 'director' is a factor-column: stack(setNames(strsplit(as.character(df$director),','), df$AB))
Split
13,773,770
157
I have a log file with size of 2.5 GB. Is there any way to split this file into smaller files using windows command prompt?
If you have installed Git for Windows, you should have Git Bash installed, since that comes with Git. Use the split command in Git Bash to split a file: into files of size 500MB each: split myLargeFile.txt -b 500m into files with 10000 lines each: split myLargeFile.txt -l 10000 Tips: If you don't have Git/Git Bash, download at https://git-scm.com/download If you lost the shortcut to Git Bash, you can run it using C:\Program Files\Git\git-bash.exe That's it! I always like examples though... Example: The files generated by split are named xaa, xab, xac, etc. These names are made up of a prefix and a suffix, which you can specify. Since I didn't specify what I want the prefix or suffix to look like, the prefix defaulted to x, and the suffix defaulted to a two-character alphabetical enumeration. Another example: split -d -a 5 myLargeFile.txt MySlice uses a filename prefix of MySlice (instead of the default x), the -d flag for numerical suffixes (instead of aa, ab, ac, etc...), and -a 5 to make the suffixes 5 digits long.
Split
31,786,287
155
Input: "tableapplechairtablecupboard..." many words What would be an efficient algorithm to split such text into a list of words and get: Output: ["table", "apple", "chair", "table", ["cupboard", ["cup", "board"]], ...] The first thing that comes to mind is to go through all possible words (starting with the first letter) and find the longest word possible, then continue from position=word_position+len(word). P.S. We have a list of all possible words. The word "cupboard" can be "cup" and "board"; select the longest. Language: Python, but the main thing is the algorithm itself.
A naive algorithm won't give good results when applied to real-world data. Here is a 20-line algorithm that exploits relative word frequency to give accurate results for real-word text. (If you want an answer to your original question which does not use word frequency, you need to refine what exactly is meant by "longest word": is it better to have a 20-letter word and ten 3-letter words, or is it better to have five 10-letter words? Once you settle on a precise definition, you just have to change the line defining wordcost to reflect the intended meaning.) The idea The best way to proceed is to model the distribution of the output. A good first approximation is to assume all words are independently distributed. Then you only need to know the relative frequency of all words. It is reasonable to assume that they follow Zipf's law, that is the word with rank n in the list of words has probability roughly 1/(n log N) where N is the number of words in the dictionary. Once you have fixed the model, you can use dynamic programming to infer the position of the spaces. The most likely sentence is the one that maximizes the product of the probability of each individual word, and it's easy to compute it with dynamic programming. Instead of directly using the probability we use a cost defined as the logarithm of the inverse of the probability to avoid overflows. The code from math import log # Build a cost dictionary, assuming Zipf's law and cost = -math.log(probability). words = open("words-by-frequency.txt").read().split() wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words)) maxword = max(len(x) for x in words) def infer_spaces(s): """Uses dynamic programming to infer the location of spaces in a string without spaces.""" # Find the best match for the i first characters, assuming cost has # been built for the i-1 first characters. # Returns a pair (match_cost, match_length). 
def best_match(i): candidates = enumerate(reversed(cost[max(0, i-maxword):i])) return min((c + wordcost.get(s[i-k-1:i], 9e999), k+1) for k,c in candidates) # Build the cost array. cost = [0] for i in range(1,len(s)+1): c,k = best_match(i) cost.append(c) # Backtrack to recover the minimal-cost string. out = [] i = len(s) while i>0: c,k = best_match(i) assert c == cost[i] out.append(s[i-k:i]) i -= k return " ".join(reversed(out)) which you can use with s = 'thumbgreenappleactiveassignmentweeklymetaphor' print(infer_spaces(s)) The results I am using this quick-and-dirty 125k-word dictionary I put together from a small subset of Wikipedia. Before: thumbgreenappleactiveassignmentweeklymetaphor. After: thumb green apple active assignment weekly metaphor. Before: thereismassesoftextinformationofpeoplescommentswhichisparsedfromhtmlbuttherearen odelimitedcharactersinthemforexamplethumbgreenappleactiveassignmentweeklymetapho rapparentlytherearethumbgreenappleetcinthestringialsohavealargedictionarytoquery whetherthewordisreasonablesowhatsthefastestwayofextractionthxalot. After: there is masses of text information of peoples comments which is parsed from html but there are no delimited characters in them for example thumb green apple active assignment weekly metaphor apparently there are thumb green apple etc in the string i also have a large dictionary to query whether the word is reasonable so what s the fastest way of extraction thx a lot. Before: itwasadarkandstormynighttherainfellintorrentsexceptatoccasionalintervalswhenitwascheckedbyaviolentgustofwindwhichsweptupthestreetsforitisinlondonthatoursceneliesrattlingalongthehousetopsandfiercelyagitatingthescantyflameofthelampsthatstruggledagainstthedarkness. 
After: it was a dark and stormy night the rain fell in torrents except at occasional intervals when it was checked by a violent gust of wind which swept up the streets for it is in london that our scene lies rattling along the housetops and fiercely agitating the scanty flame of the lamps that struggled against the darkness. As you can see it is essentially flawless. The most important part is to make sure your word list was trained to a corpus similar to what you will actually encounter, otherwise the results will be very bad. Optimization The implementation consumes a linear amount of time and memory, so it is reasonably efficient. If you need further speedups, you can build a suffix tree from the word list to reduce the size of the set of candidates. If you need to process a very large consecutive string it would be reasonable to split the string to avoid excessive memory usage. For example you could process the text in blocks of 10000 characters plus a margin of 1000 characters on either side to avoid boundary effects. This will keep memory usage to a minimum and will have almost certainly no effect on the quality.
Split
8,870,261
154
I have a DataFrame with a column Sales. How can I split it into two DataFrames based on the Sales value? The first DataFrame should have rows with 'Sales' < s and the second with 'Sales' >= s.
You can use boolean indexing: df = pd.DataFrame({'Sales':[10,20,30,40,50], 'A':[3,4,7,6,1]}) print (df) A Sales 0 3 10 1 4 20 2 7 30 3 6 40 4 1 50 s = 30 df1 = df[df['Sales'] >= s] print (df1) A Sales 2 7 30 3 6 40 4 1 50 df2 = df[df['Sales'] < s] print (df2) A Sales 0 3 10 1 4 20 It's also possible to invert mask by ~: mask = df['Sales'] >= s df1 = df[mask] df2 = df[~mask] print (df1) A Sales 2 7 30 3 6 40 4 1 50 print (df2) A Sales 0 3 10 1 4 20 print (mask) 0 False 1 False 2 True 3 True 4 True Name: Sales, dtype: bool print (~mask) 0 True 1 True 2 False 3 False 4 False Name: Sales, dtype: bool
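The mask-and-negate partition above can be sketched in plain Python (an illustration of the logic only, not a pandas replacement):

```python
sales = [10, 20, 30, 40, 50]
s = 30
# the mask and its negation split the data into two disjoint
# groups, just like df[mask] and df[~mask] in the answer above
mask = [v >= s for v in sales]
at_or_above = [v for v, m in zip(sales, mask) if m]
below = [v for v, m in zip(sales, mask) if not m]
print(at_or_above)  # [30, 40, 50]
print(below)        # [10, 20]
```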
Split
33,742,588
153
I'm using jQuery, and I have a textarea. When I submit by my button I will alert each text separated by newline. How to split my text when there is a newline?

    var ks = $('#keywords').val().split("\n");
    (function($){
        $(document).ready(function(){
            $('#data').submit(function(e){
                e.preventDefault();
                alert(ks[0]);
                $.each(ks, function(k){
                    alert(k);
                });
            });
        });
    })(jQuery);

example input :

    Hello
    There

Result I want is :

    alert(Hello); and alert(There)
You should parse newlines regardless of the platform (operating system). This split is universal with regular expressions. You may consider using this:

    var ks = $('#keywords').val().split(/\r?\n/);

E.g.

    "a\nb\r\nc\r\nlala".split(/\r?\n/) // ["a", "b", "c", "lala"]
Split
8,125,709
153
I need to split a String into an array of single character Strings. Eg, splitting "cat" would give the array "c", "a", "t"
    "cat".split("(?!^)")

This will produce array

    ["c", "a", "t"]
Split
5,235,401
152
What is the best way to split a string like "HELLO there HOW are YOU" by upper-case words? So I'd end up with an array like such:

    results = ['HELLO there', 'HOW are', 'YOU']

I have tried:

    p = re.compile("\b[A-Z]{2,}\b")
    print p.split(page_text)

It doesn't seem to work, though.
I suggest

    l = re.compile("(?<!^)\s+(?=[A-Z])(?!.\s)").split(s)

Check this demo.
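A quick demonstration of that split (example added here, not part of the original answer):

```python
import re

s = "HELLO there HOW are YOU"

# split on whitespace that precedes an uppercase letter, except at the
# start of the string and except before single capital letters
parts = re.compile(r"(?<!^)\s+(?=[A-Z])(?!.\s)").split(s)
print(parts)  # ['HELLO there', 'HOW are', 'YOU']
```

Note the raw string (r"..."): in the question's attempt, "\b" inside a plain string literal is a backspace character rather than a word boundary, which is one reason the original pattern failed.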
Split
13,209,288
150
Hello, I am trying what I thought would be a rather easy regex in JavaScript but it is giving me lots of trouble. I want the ability to split a date via JavaScript, splitting either by a '-', '.', '/' or ' '.

    var date = "02-25-2010";
    var myregexp2 = new RegExp("-.");
    dateArray = date.split(myregexp2);

What is the correct regex for this? Any and all help would be great.
You need to put the characters you wish to split on in a character class, which tells the regular expression engine "any of these characters is a match". For your purposes, this would look like:

    date.split(/[.,\/ -]/)

Although dashes have special meaning in character classes as a range specifier (i.e. [a-z] means the same as [abcdefghijklmnopqrstuvwxyz]), if you put it as the last thing in the class it is taken to mean a literal dash and does not need to be escaped.

To explain why your pattern didn't work, /-./ tells the regular expression engine to match a literal dash character followed by any character (dots are wildcard characters in regular expressions). With "02-25-2010", it would split each time "-2" is encountered, because the dash matches and the dot matches "2".
Split
3,559,883
149
I have a very large dataframe (around 1 million rows) with data from an experiment (60 respondents). I would like to split the dataframe into 60 dataframes (a dataframe for each participant). In the dataframe, data, there is a variable called 'name', which is the unique code for each participant.

I have tried the following, but nothing happens (or execution does not stop within an hour). What I intend to do is to split the data into smaller dataframes, and append these to a list (datalist):

    import pandas as pd

    def splitframe(data, name='name'):

        n = data[name][0]
        df = pd.DataFrame(columns=data.columns)
        datalist = []

        for i in range(len(data)):
            if data[name][i] == n:
                df = df.append(data.iloc[i])
            else:
                datalist.append(df)
                df = pd.DataFrame(columns=data.columns)
                n = data[name][i]
                df = df.append(data.iloc[i])

        return datalist

I do not get an error message, the script just seems to run forever! Is there a smart way to do it?
Can I ask why not just do it by slicing the data frame. Something like

    # create some data with Names column
    data = pd.DataFrame({'Names': ['Joe', 'John', 'Jasper', 'Jez'] * 4,
                         'Ob1': np.random.rand(16),
                         'Ob2': np.random.rand(16)})

    # create unique list of names
    UniqueNames = data.Names.unique()

    # create a data frame dictionary to store your data frames
    DataFrameDict = {elem: pd.DataFrame() for elem in UniqueNames}

    for key in DataFrameDict.keys():
        DataFrameDict[key] = data[:][data.Names == key]

Hey presto you have a dictionary of data frames just as (I think) you want them. Need to access one? Just enter

    DataFrameDict['Joe']
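A hedged aside, not part of the original answer: pandas' groupby already yields one sub-frame per unique key, so the same dictionary can be built in a single pass instead of rescanning the full frame once per name:

```python
import pandas as pd

data = pd.DataFrame({'Names': ['Joe', 'John', 'Jasper', 'Jez'] * 4,
                     'Ob1': range(16)})

# groupby('Names') iterates (name, sub-frame) pairs, one per unique name
frames = {name: group for name, group in data.groupby('Names')}

print(sorted(frames))      # ['Jasper', 'Jez', 'Joe', 'John']
print(len(frames['Joe']))  # 4
```

For a million-row frame this single grouped pass should be considerably faster than filtering the whole frame once per participant.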
Split
19,790,790
147
Invoking :help in Vim, I got the help manual page with split window. I want to maximize the help manual window and close the other window. How can I do this? What is the Vim command to do this?
You can employ Ctrl+WT (that's a capital T) to move any open window to its own tab.

As mentioned by others, Ctrl+W_ / Ctrl+W| to maximize within the current tab/window layout (while respecting min height/width settings for various other windows). (Ctrl+W= resizes all windows to equal size, respecting the minimum height/width settings)

Edit

To the comment:

1. start vim (e.g. gvim /tmp/test.cpp)
2. invoke help, :help various-motions - opens a split window
3. move help into separate tab maximized: C-wT
4. enjoy reading the fine manual :)
5. move the help back into the original tab: mAZZ<C-w>S`A
   - mA: set global mark A
   - ZZ: close help buffer/tab
   - C-wS: split original window
   - `A: jump to saved mark A

You can avoid using a mark for normal (non-help) buffers. Let me know if you're interested.
Split
7,830,817
142
I have a string = "name"; I want to convert it into a string array. How do I do it? Is there any Java built-in function? Manually I can do it but I'm searching for a Java built-in function.

I want an array where each character of the string will be a string, e.g. char 'n' will now be the string "n" stored in an array.
To start you off on your assignment, String.split splits strings on a regular expression and this expression may be an empty string:

    String[] ary = "abc".split("");

Yields the array:

    (java.lang.String[]) [, a, b, c]

Getting rid of the empty 1st entry is left as an exercise for the reader :-)

Note: In Java 8, the empty first element is no longer included.
Split
3,413,586
142
I've got a table field membername which contains both the last name and the first name of users. Is it possible to split those into 2 fields memberfirst, memberlast? All the records have this format "Firstname Lastname" (without quotes and a space in between).
Unfortunately MySQL does not feature a split string function. However you can create a user defined function for this, such as the one described in the following article:

MySQL Split String Function by Federico Cargnelutti

With that function:

    DELIMITER $$

    CREATE FUNCTION SPLIT_STR(
      x VARCHAR(255),
      delim VARCHAR(12),
      pos INT
    )
    RETURNS VARCHAR(255)
    DETERMINISTIC
    BEGIN
        RETURN REPLACE(SUBSTRING(SUBSTRING_INDEX(x, delim, pos),
               LENGTH(SUBSTRING_INDEX(x, delim, pos -1)) + 1),
               delim, '');
    END$$

    DELIMITER ;

you would be able to build your query as follows:

    SELECT SPLIT_STR(membername, ' ', 1) as memberfirst,
           SPLIT_STR(membername, ' ', 2) as memberlast
    FROM users;

If you prefer not to use a user defined function and you do not mind the query to be a bit more verbose, you can also do the following:

    SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(membername, ' ', 1), ' ', -1) as memberfirst,
           SUBSTRING_INDEX(SUBSTRING_INDEX(membername, ' ', 2), ' ', -1) as memberlast
    FROM users;
Split
2,696,884
139
How to split a List of elements into lists with at most N items? ex: Given a list with 7 elements, create groups of 4, leaving the last group possibly with less elements. split(List(1,2,3,4,5,6,"seven"),4) => List(List(1,2,3,4), List(5,6,"seven"))
I think you're looking for grouped. It returns an iterator, but you can convert the result to a list,

    scala> List(1,2,3,4,5,6,"seven").grouped(4).toList
    res0: List[List[Any]] = List(List(1, 2, 3, 4), List(5, 6, seven))
Split
7,459,174
137
Please explain to me the working of strtok() function. The manual says it breaks the string into tokens. I am unable to understand from the manual what it actually does.

I added watches on str and *pch to check its working; when the first while loop occurred, the contents of str were only "this". How did the output shown below get printed on the screen?

    /* strtok example */
    #include <stdio.h>
    #include <string.h>

    int main ()
    {
      char str[] ="- This, a sample string.";
      char * pch;
      printf ("Splitting string \"%s\" into tokens:\n",str);
      pch = strtok (str," ,.-");
      while (pch != NULL)
      {
        printf ("%s\n",pch);
        pch = strtok (NULL, " ,.-");
      }
      return 0;
    }

Output:

    Splitting string "- This, a sample string." into tokens:
    This
    a
    sample
    string
The strtok runtime function works like this.

The first time you call strtok you provide a string that you want to tokenize:

    char s[] = "this is a string";

In the above string space seems to be a good delimiter between words so let's use that:

    char* p = strtok(s, " ");

What happens now is that 's' is searched until the space character is found, the first token is returned ('this') and p points to that token (string).

In order to get the next token and to continue with the same string, NULL is passed as first argument since strtok maintains a static pointer to your previously passed string:

    p = strtok(NULL," ");

p now points to 'is', and so on until no more spaces can be found, then the last string is returned as the last token 'string'.

More conveniently you could write it like this instead to print out all tokens:

    for (char *p = strtok(s," "); p != NULL; p = strtok(NULL, " "))
    {
      puts(p);
    }

EDIT:

If you want to store the returned values from strtok you need to copy the token to another buffer, e.g. strdup(p);, since the original string (pointed to by the static pointer inside strtok) is modified between iterations in order to return the token.
Split
3,889,992
136
I'm new to regular expressions and would appreciate your help. I'm trying to put together an expression that will split the example string using all spaces that are not surrounded by single or double quotes. My last attempt looks like this: (?!") and isn't quite working. It's splitting on the space before the quote. Example input: This is a string that "will be" highlighted when your 'regular expression' matches something. Desired output: This is a string that will be highlighted when your regular expression matches something. Note that "will be" and 'regular expression' retain the space between the words.
I don't understand why all the others are proposing such complex regular expressions or such long code. Essentially, you want to grab two kinds of things from your string: sequences of characters that aren't spaces or quotes, and sequences of characters that begin and end with a quote, with no quotes in between, for two kinds of quotes. You can easily match those things with this regular expression:

    [^\s"']+|"([^"]*)"|'([^']*)'

I added the capturing groups because you don't want the quotes in the list.

This Java code builds the list, adding the capturing group if it matched to exclude the quotes, and adding the overall regex match if the capturing group didn't match (an unquoted word was matched).

    List<String> matchList = new ArrayList<String>();
    Pattern regex = Pattern.compile("[^\\s\"']+|\"([^\"]*)\"|'([^']*)'");
    Matcher regexMatcher = regex.matcher(subjectString);
    while (regexMatcher.find()) {
        if (regexMatcher.group(1) != null) {
            // Add double-quoted string without the quotes
            matchList.add(regexMatcher.group(1));
        } else if (regexMatcher.group(2) != null) {
            // Add single-quoted string without the quotes
            matchList.add(regexMatcher.group(2));
        } else {
            // Add unquoted word
            matchList.add(regexMatcher.group());
        }
    }

If you don't mind having the quotes in the returned list, you can use much simpler code:

    List<String> matchList = new ArrayList<String>();
    Pattern regex = Pattern.compile("[^\\s\"']+|\"[^\"]*\"|'[^']*'");
    Matcher regexMatcher = regex.matcher(subjectString);
    while (regexMatcher.find()) {
        matchList.add(regexMatcher.group());
    }
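For what it's worth, the same three-alternative, capture-group idea carries over to other regex engines; this Python sketch (added here, not part of the original Java answer) uses m.lastindex to pick whichever alternative actually matched:

```python
import re

s = """This is a string that "will be" highlighted when your 'regular expression' matches something."""

# same three alternatives: double-quoted, single-quoted, bare word
pattern = re.compile(r'''"([^"]*)"|'([^']*)'|([^\s"']+)''')

# m.lastindex is the number of the group that matched, so quoted
# tokens come back without their surrounding quotes
tokens = [m.group(m.lastindex) for m in pattern.finditer(s)]
print(tokens)  # [..., 'will be', ..., 'regular expression', ...]
```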
Split
366,202
135
This code almost does what I need it to..

    for line in all_lines:
        s = line.split('>')

Except it removes all the '>' delimiters.

So,

    <html><head>

Turns into

    ['<html', '<head']

Is there a way to use the split() method but keep the delimiter, instead of removing it? With these results..

    ['<html>', '<head>']
    d = ">"
    for line in all_lines:
        s = [e+d for e in line.split(d) if e]
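An alternative sketch, not from the original answer: re.split keeps any capturing group in its output, so the delimiter can be preserved without re-appending it by hand (the list comprehension above would also tack '>' onto a trailing chunk even when the line doesn't end with the delimiter; the regex version does not):

```python
import re

line = "<html><head>"

# the capturing group (>) makes re.split keep each delimiter as its own item
parts = re.split(r"(>)", line)
print(parts)  # ['<html', '>', '<head', '>', '']

# glue each chunk back onto the delimiter that followed it
tokens = [a + b for a, b in zip(parts[0::2], parts[1::2])]
print(tokens)  # ['<html>', '<head>']
```

On Python 3.7+ a zero-width lookbehind split, re.split(r'(?<=>)', line), gives ['<html>', '<head>', ''] directly.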
Split
7,866,128
134
I am trying to split this string in python: 2.7.0_bf4fda703454 I want to split that string on the underscore _ so that I can use the value on the left side.
"2.7.0_bf4fda703454".split("_") gives a list of strings:

    In [1]: "2.7.0_bf4fda703454".split("_")
    Out[1]: ['2.7.0', 'bf4fda703454']

This splits the string at every underscore. If you want it to stop after the first split, use "2.7.0_bf4fda703454".split("_", 1).

If you know for a fact that the string contains an underscore, you can even unpack the LHS and RHS into separate variables:

    In [8]: lhs, rhs = "2.7.0_bf4fda703454".split("_", 1)

    In [9]: lhs
    Out[9]: '2.7.0'

    In [10]: rhs
    Out[10]: 'bf4fda703454'

An alternative is to use partition(). The usage is similar to the last example, except that it returns three components instead of two. The principal advantage is that this method doesn't fail if the string doesn't contain the separator.
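To illustrate that last point (example added here, not part of the original answer):

```python
s = "2.7.0_bf4fda703454"

# partition always returns a 3-tuple: (head, separator, tail)
head, sep, tail = s.partition("_")
print(head)  # 2.7.0
print(tail)  # bf4fda703454

# unlike two-variable unpacking of split, partition cannot raise on a
# missing separator; it returns the whole string plus two empty strings
print("no-underscore".partition("_"))  # ('no-underscore', '', '')
```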
Split
5,749,195
133
I was wondering if it was possible to split a file into equal parts (edit: = all equal except for the last), without breaking the line? Using the split command in Unix, lines may be broken in half. Is there a way to, say, split up a file in 5 equal parts, but have it still only consist of whole lines (it's no problem if one of the files is a little larger or smaller)? I know I could just calculate the number of lines, but I have to do this for a lot of files in a bash script. Many thanks!
If you mean an equal number of lines, split has an option for this:

    split --lines=75

If you need to know what that 75 should really be for N equal parts, it's:

    lines_per_part = int(total_lines + N - 1) / N

where total lines can be obtained with wc -l.

See the following script for an example:

    #!/usr/bin/bash

    # Configuration stuff

    fspec=qq.c
    num_files=6

    # Work out lines per file.

    total_lines=$(wc -l <${fspec})
    ((lines_per_file = (total_lines + num_files - 1) / num_files))

    # Split the actual file, maintaining lines.

    split --lines=${lines_per_file} ${fspec} xyzzy.

    # Debug information

    echo "Total lines = ${total_lines}"
    echo "Lines per file = ${lines_per_file}"
    wc -l xyzzy.*

This outputs:

    Total lines = 70
    Lines per file = 12
    12 xyzzy.aa
    12 xyzzy.ab
    12 xyzzy.ac
    12 xyzzy.ad
    12 xyzzy.ae
    10 xyzzy.af
    70 total

More recent versions of split allow you to specify a number of CHUNKS with the -n/--number option. You can therefore use something like:

    split --number=l/6 ${fspec} xyzzy.

(that's ell-slash-six, meaning lines, not one-slash-six).

That will give you roughly equal files in terms of size, with no mid-line splits. I mention that last point because it doesn't give you roughly the same number of lines in each file, more the same number of characters.

So, if you have one 20-character line and 19 1-character lines (twenty lines in total) and split to five files, you most likely won't get four lines in every file.
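The lines_per_part arithmetic above is just a ceiling division; as a quick sanity check (sketch added here, not part of the original answer), the same calculation in Python reproduces the 70-line/6-file example:

```python
# whole-line splitting: ceil(total / n) lines per part, last part shorter
def split_lines(lines, n):
    per_part = -(-len(lines) // n)  # ceiling division, like (total + n - 1) / n
    return [lines[i:i + per_part] for i in range(0, len(lines), per_part)]

parts = split_lines([f"line{i}" for i in range(70)], 6)
print([len(p) for p in parts])  # [12, 12, 12, 12, 12, 10]
```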
Split
7,764,755
130
I have the following type of string

    var string = "'string, duppi, du', 23, lala"

I want to split the string into an array on each comma, but only the commas outside the single quotation marks.

I can't figure out the right regular expression for the split...

    string.split(/,/)

will give me

    ["'string", " duppi", " du'", " 23", " lala"]

but the result should be:

    ["string, duppi, du", "23", "lala"]

Is there a cross-browser solution?
Disclaimer

2014-12-01 Update: The answer below works only for one very specific format of CSV. As correctly pointed out by DG in the comments, this solution does NOT fit the RFC 4180 definition of CSV and it also does NOT fit MS Excel format. This solution simply demonstrates how one can parse one (non-standard) CSV line of input which contains a mix of string types, where the strings may contain escaped quotes and commas.

A non-standard CSV solution

As austincheney correctly points out, you really need to parse the string from start to finish if you wish to properly handle quoted strings that may contain escaped characters. Also, the OP does not clearly define what a "CSV string" really is. First we must define what constitutes a valid CSV string and its individual values.

Given: "CSV String" Definition

For the purpose of this discussion, a "CSV string" consists of zero or more values, where multiple values are separated by a comma. Each value may consist of:

1. A double quoted string. (may contain unescaped single quotes.)
2. A single quoted string. (may contain unescaped double quotes.)
3. A non-quoted string. (may NOT contain quotes, commas or backslashes.)
4. An empty value. (An all whitespace value is considered empty.)

Rules/Notes:

- Quoted values may contain commas.
- Quoted values may contain escaped-anything, e.g. 'that\'s cool'.
- Values containing quotes, commas, or backslashes must be quoted.
- Values containing leading or trailing whitespace must be quoted.
- The backslash is removed from all: \' in single quoted values.
- The backslash is removed from all: \" in double quoted values.
- Non-quoted strings are trimmed of any leading and trailing spaces.
- The comma separator may have adjacent whitespace (which is ignored).

Find: A JavaScript function which converts a valid CSV string (as defined above) into an array of string values.

Solution:

The regular expressions used by this solution are complex. And (IMHO) all non-trivial regexes should be presented in free-spacing mode with lots of comments and indentation. Unfortunately, JavaScript does not allow free-spacing mode. Thus, the regular expressions implemented by this solution are first presented in native regex syntax (expressed using Python's handy: r'''...''' raw-multi-line-string syntax).

First here is a regular expression which validates that a CSV string meets the above requirements:

Regex to validate a "CSV string":

    re_valid = r"""
    # Validate a CSV string having single, double or un-quoted values.
    ^                                   # Anchor to start of string.
    \s*                                 # Allow whitespace before value.
    (?:                                 # Group for value alternatives.
      '[^'\\]*(?:\\[\S\s][^'\\]*)*'     # Either Single quoted string,
    | "[^"\\]*(?:\\[\S\s][^"\\]*)*"     # or Double quoted string,
    | [^,'"\s\\]*(?:\s+[^,'"\s\\]+)*    # or Non-comma, non-quote stuff.
    )                                   # End group of value alternatives.
    \s*                                 # Allow whitespace after value.
    (?:                                 # Zero or more additional values
      ,                                 # Values separated by a comma.
      \s*                               # Allow whitespace before value.
      (?:                               # Group for value alternatives.
        '[^'\\]*(?:\\[\S\s][^'\\]*)*'   # Either Single quoted string,
      | "[^"\\]*(?:\\[\S\s][^"\\]*)*"   # or Double quoted string,
      | [^,'"\s\\]*(?:\s+[^,'"\s\\]+)*  # or Non-comma, non-quote stuff.
      )                                 # End group of value alternatives.
      \s*                               # Allow whitespace after value.
    )*                                  # Zero or more additional values
    $                                   # Anchor to end of string.
    """

If a string matches the above regex, then that string is a valid CSV string (according to the rules previously stated) and may be parsed using the following regex. The following regex is then used to match one value from the CSV string. It is applied repeatedly until no more matches are found (and all values have been parsed).

Regex to parse one value from valid CSV string:

    re_value = r"""
    # Match one value in valid CSV string.
    (?!\s*$)                            # Don't match empty last value.
    \s*                                 # Strip whitespace before value.
    (?:                                 # Group for value alternatives.
      '([^'\\]*(?:\\[\S\s][^'\\]*)*)'   # Either $1: Single quoted string,
    | "([^"\\]*(?:\\[\S\s][^"\\]*)*)"   # or $2: Double quoted string,
    | ([^,'"\s\\]*(?:\s+[^,'"\s\\]+)*)  # or $3: Non-comma, non-quote stuff.
    )                                   # End group of value alternatives.
    \s*                                 # Strip whitespace after value.
    (?:,|$)                             # Field ends on comma or EOS.
    """

Note that there is one special case value that this regex does not match - the very last value when that value is empty. This special "empty last value" case is tested for and handled by the js function which follows.

Example input and output:

In the following examples, curly braces are used to delimit the {result strings}. (This is to help visualize leading/trailing spaces and zero-length strings.)

    // Return array of string values, or NULL if CSV string not well formed.
    function CSVtoArray(text) {
        var re_valid = /^\s*(?:'[^'\\]*(?:\\[\S\s][^'\\]*)*'|"[^"\\]*(?:\\[\S\s][^"\\]*)*"|[^,'"\s\\]*(?:\s+[^,'"\s\\]+)*)\s*(?:,\s*(?:'[^'\\]*(?:\\[\S\s][^'\\]*)*'|"[^"\\]*(?:\\[\S\s][^"\\]*)*"|[^,'"\s\\]*(?:\s+[^,'"\s\\]+)*)\s*)*$/;
        var re_value = /(?!\s*$)\s*(?:'([^'\\]*(?:\\[\S\s][^'\\]*)*)'|"([^"\\]*(?:\\[\S\s][^"\\]*)*)"|([^,'"\s\\]*(?:\s+[^,'"\s\\]+)*))\s*(?:,|$)/g;

        // Return NULL if input string is not well formed CSV string.
        if (!re_valid.test(text)) return null;

        var a = [];                // Initialize array to receive values.
        text.replace(re_value,     // "Walk" the string using replace with callback.
            function(m0, m1, m2, m3) {
                // Remove backslash from \' in single quoted values.
                if      (m1 !== undefined) a.push(m1.replace(/\\'/g, "'"));
                // Remove backslash from \" in double quoted values.
                else if (m2 !== undefined) a.push(m2.replace(/\\"/g, '"'));
                else if (m3 !== undefined) a.push(m3);
                return '';         // Return empty string.
            });

        // Handle special case of empty last value.
        if (/,\s*$/.test(text)) a.push('');
        return a;
    };

    console.log('Test 1: Test string from original question.');
    console.log(CSVtoArray("'string, duppi, du', 23, lala"));
    console.log('Test 2: Empty CSV string.');
    console.log(CSVtoArray(""));
    console.log('Test 3: CSV string with two empty values.');
    console.log(CSVtoArray(","));
    console.log('Test 4: Double quoted CSV string having single quoted values.');
    console.log(CSVtoArray("'one','two with escaped \' single quote', 'three, with, commas'"));
    console.log('Test 5: Single quoted CSV string having double quoted values.');
    console.log(CSVtoArray('"one","two with escaped \" double quote", "three, with, commas"'));
    console.log('Test 6: CSV string with whitespace in and around empty and non-empty values.');
    console.log(CSVtoArray(" one , 'two' , , ' four' ,, 'six ', ' seven ' , "));
    console.log('Test 7: Not valid');
    console.log(CSVtoArray("one, that's me!, escaped \, comma"));

Additional notes:

This solution requires that the CSV string be "valid". For example, unquoted values may not contain backslashes or quotes, e.g. the following CSV string is NOT valid:

    var invalid1 = "one, that's me!, escaped \, comma"

This is not really a limitation because any sub-string may be represented as either a single or double quoted value. Note also that this solution represents only one possible definition for: "Comma Separated Values".

Edit: 2014-05-19: Added disclaimer.
Edit: 2014-12-01: Moved disclaimer to top.
Split
8,493,195
128
According to the Hadoop - The Definitive Guide The logical records that FileInputFormats define do not usually fit neatly into HDFS blocks. For example, a TextInputFormat’s logical records are lines, which will cross HDFS boundaries more often than not. This has no bearing on the functioning of your program—lines are not missed or broken, for example—but it’s worth knowing about, as it does mean that data-local maps (that is, maps that are running on the same host as their input data) will perform some remote reads. The slight overhead this causes is not normally significant. Suppose a record line is split across two blocks (b1 and b2). The mapper processing the first block (b1) will notice that the last line doesn't have a EOL separator and fetches the remaining of the line from the next block of data (b2). How does the mapper processing the second block (b2) determine that the first record is incomplete and should process starting from the second record in the block (b2)?
Interesting question, I spent some time looking at the code for the details and here are my thoughts.

The splits are handled by the client by InputFormat.getSplits, so a look at FileInputFormat gives the following info:

1. For each input file, get the file length, the block size and calculate the split size as max(minSize, min(maxSize, blockSize)) where maxSize corresponds to mapred.max.split.size and minSize is mapred.min.split.size.
2. Divide the file into different FileSplits based on the split size calculated above. What's important here is that each FileSplit is initialized with a start parameter corresponding to the offset in the input file. There is still no handling of the lines at that point. The relevant part of the code looks like this:

    while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
      int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
      splits.add(new FileSplit(path, length-bytesRemaining, splitSize,
                               blkLocations[blkIndex].getHosts()));
      bytesRemaining -= splitSize;
    }

After that, if you look at the LineRecordReader which is defined by the TextInputFormat, that's where the lines are handled:

- When you initialize your LineRecordReader it tries to instantiate a LineReader which is an abstraction to be able to read lines over FSDataInputStream. There are 2 cases:
- If there is a CompressionCodec defined, then this codec is responsible for handling boundaries. Probably not relevant to your question.
- If there is no codec however, that's where things are interesting: if the start of your InputSplit is different than 0, then you backtrack 1 character and then skip the first line you encounter identified by \n or \r\n (Windows)! The backtrack is important because in case your line boundaries are the same as split boundaries, this ensures you do not skip the valid line.

Here is the relevant code:

    if (codec != null) {
       in = new LineReader(codec.createInputStream(fileIn), job);
       end = Long.MAX_VALUE;
    } else {
       if (start != 0) {
         skipFirstLine = true;
         --start;
         fileIn.seek(start);
       }
       in = new LineReader(fileIn, job);
    }
    if (skipFirstLine) {  // skip first line and re-establish "start".
      start += in.readLine(new Text(), 0,
                           (int)Math.min((long)Integer.MAX_VALUE, end - start));
    }
    this.pos = start;

So since the splits are calculated in the client, the mappers don't need to run in sequence, every mapper already knows if it needs to discard the first line or not.

So basically if you have 2 lines of each 100Mb in the same file, and to simplify let's say the split size is 64Mb. Then when the input splits are calculated, we will have the following scenario:

- Split 1 containing the path and the hosts to this block. Initialized at start 200-200=0Mb, length 64Mb.
- Split 2 initialized at start 200-200+64=64Mb, length 64Mb.
- Split 3 initialized at start 200-200+128=128Mb, length 64Mb.
- Split 4 initialized at start 200-200+192=192Mb, length 8Mb.
- Mapper A will process split 1, start is 0 so don't skip first line, and read a full line which goes beyond the 64Mb limit so needs remote read.
- Mapper B will process split 2, start is != 0 so skip the first line after 64Mb-1byte, which corresponds to the end of line 1 at 100Mb which is still in split 2, we have 28Mb of the line in split 2, so remote read the remaining 72Mb.
- Mapper C will process split 3, start is != 0 so skip the first line after 128Mb-1byte, which corresponds to the end of line 2 at 200Mb, which is end of file so don't do anything.
- Mapper D is the same as mapper C except it looks for a newline after 192Mb-1byte.
Split
14,291,170
125
I am wondering if I am going about splitting a string on a . the right way? My code is: String[] fn = filename.split("."); return fn[0]; I only need the first part of the string, that's why I return the first item. I ask because I noticed in the API that . means any character, so now I'm stuck.
split() accepts a regular expression, so you need to escape . to not consider it as a regex meta character. Here's an example:

    String[] fn = filename.split("\\.");
    return fn[0];
Split
3,387,622
125
I have a column in a pandas DataFrame that I would like to split on a single space. The splitting is simple enough with DataFrame.str.split(' '), but I can't make a new column from the last entry. When I .str.split() the column I get a list of arrays and I don't know how to manipulate this to get a new column for my DataFrame.

Here is an example. Each entry in the column contains 'symbol data price' and I would like to split off the price (and eventually remove the "p"... or "c" in half the cases).

    import pandas as pd
    temp = pd.DataFrame({'ticker' : ['spx 5/25/2001 p500', 'spx 5/25/2001 p600', 'spx 5/25/2001 p700']})
    temp2 = temp.ticker.str.split(' ')

which yields

    0    ['spx', '5/25/2001', 'p500']
    1    ['spx', '5/25/2001', 'p600']
    2    ['spx', '5/25/2001', 'p700']

But temp2[0] just gives one list entry's array and temp2[:][-1] fails. How can I convert the last entry in each array to a new column? Thanks!
Do this:

    In [43]: temp2.str[-1]
    Out[43]:
    0    p500
    1    p600
    2    p700
    Name: ticker

So all together it would be:

    >>> temp = pd.DataFrame({'ticker' : ['spx 5/25/2001 p500', 'spx 5/25/2001 p600', 'spx 5/25/2001 p700']})
    >>> temp['ticker'].str.split(' ').str[-1]
    0    p500
    1    p600
    2    p700
    Name: ticker, dtype: object
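A sketch of a related approach (added here, not part of the original answer; it assumes a pandas version where str.split supports the expand= keyword, i.e. 0.16.1 or later), which also strips the leading "p"/"c" the question mentions:

```python
import pandas as pd

temp = pd.DataFrame({'ticker': ['spx 5/25/2001 p500',
                                'spx 5/25/2001 p600',
                                'spx 5/25/2001 p700']})

# expand=True returns a DataFrame with one column per token
parts = temp['ticker'].str.split(' ', expand=True)

# last token, minus any leading "p"/"c" characters
temp['price'] = parts[2].str.lstrip('pc')
print(temp['price'].tolist())  # ['500', '600', '700']
```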
Split
12,504,976
122
I would like to count the number of lines in a string. I tried to use this stackoverflow answer,

    lines = str.split("\r\n|\r|\n");
    return lines.length;

on this string (which was originally a buffer):

    GET / HTTP/1.1
    Host: localhost:8888
    Connection: keep-alive
    Cache-Control: max-age=0
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

and, for some reason, I got lines='1'. Any idea how to make it work?
Using a regular expression you can count the number of lines as

    str.split(/\r\n|\r|\n/).length

Alternately you can try split method as below.

    var lines = $("#ptest").val().split("\n");
    alert(lines.length);

working solution: http://jsfiddle.net/C8CaX/
Split
8,488,729
121
Is there a function in python to split a word into a list of single letters? e.g: s = "Word to Split" to get wordlist = ['W', 'o', 'r', 'd', ' ', 't', 'o', ' ', 'S', 'p', 'l', 'i', 't']
    >>> list("Word to Split")
    ['W', 'o', 'r', 'd', ' ', 't', 'o', ' ', 'S', 'p', 'l', 'i', 't']
Split
113,655
120
I've CSV file (around 10,000 rows ; each row having 300 columns) stored on LINUX server. I want to break this CSV file into 500 CSV files of 20 records each. (Each having same CSV header as present in original CSV) Is there any linux command to help this conversion?
Use the Linux split command:

    split -l 20 file.txt new

Split the file "file.txt" into files beginning with the name "new", each containing 20 lines of text.

Type man split at the Unix prompt for more information. However you will have to first remove the header from file.txt (using the tail command, for example) and then add it back on to each of the split files.
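For completeness, a small sketch of the header-preserving variant the answer alludes to (added here, not part of the original answer): it chunks the data rows and writes the original header in front of every chunk:

```python
import csv, io, itertools

def split_csv(src, rows_per_chunk=20):
    """Yield CSV text chunks, each with the original header row on top."""
    reader = csv.reader(src)
    header = next(reader)                 # first row is the shared header
    while True:
        batch = list(itertools.islice(reader, rows_per_chunk))
        if not batch:
            break
        out = io.StringIO()
        writer = csv.writer(out)
        writer.writerow(header)           # re-emit header per chunk
        writer.writerows(batch)
        yield out.getvalue()

# 5 data rows split into chunks of 2 -> 3 chunks (2 + 2 + 1 rows)
data = "h1,h2\n" + "\n".join(f"a{i},b{i}" for i in range(5)) + "\n"
chunks = list(split_csv(io.StringIO(data), rows_per_chunk=2))
print(len(chunks))  # 3
```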
Split
20,721,120
118
Is there any way to split strings in objective c into arrays? I mean like this - input string Yes:0:42:value into an array of (Yes,0,42,value)?
    NSArray *arrayOfComponents = [yourString componentsSeparatedByString:@":"];

where yourString contains

    @"one:two:three"

and arrayOfComponents will contain

    @[@"one", @"two", @"three"]

and you can access each with

    NSString *comp1 = arrayOfComponents[0];

(https://developer.apple.com/documentation/foundation/nsstring/1413214-componentsseparatedbystring)
Split
3,558,888
118
Before Java 8 when we split on empty string like String[] tokens = "abc".split(""); split mechanism would split in places marked with | |a|b|c| because empty space "" exists before and after each character. So as result it would generate at first this array ["", "a", "b", "c", ""] and later will remove trailing empty strings (because we didn't explicitly provide negative value to limit argument) so it will finally return ["", "a", "b", "c"] In Java 8 split mechanism seems to have changed. Now when we use "abc".split("") we will get ["a", "b", "c"] array instead of ["", "a", "b", "c"]. My first guess was that maybe now leading empty strings are also removed just like trailing empty strings. But this theory fails, since "abc".split("a") returns ["", "bc"], so leading empty string was not removed. Can someone explain what is going on here? How rules of split have changed in Java 8?
The behavior of String.split (which calls Pattern.split) changes between Java 7 and Java 8.

Documentation

Comparing the documentation of Pattern.split in Java 7 and Java 8, we observe the following clause being added:

When there is a positive-width match at the beginning of the input sequence then an empty leading substring is included at the beginning of the resulting array. A zero-width match at the beginning however never produces such empty leading substring.

The same clause is also added to String.split in Java 8, compared to Java 7.

Reference implementation

Let us compare the code of Pattern.split in the reference implementation in Java 7 and Java 8. The code is retrieved from grepcode, for versions 7u40-b43 and 8-b132.

Java 7

```java
public String[] split(CharSequence input, int limit) {
    int index = 0;
    boolean matchLimited = limit > 0;
    ArrayList<String> matchList = new ArrayList<>();
    Matcher m = matcher(input);

    // Add segments before each match found
    while(m.find()) {
        if (!matchLimited || matchList.size() < limit - 1) {
            String match = input.subSequence(index, m.start()).toString();
            matchList.add(match);
            index = m.end();
        } else if (matchList.size() == limit - 1) { // last one
            String match = input.subSequence(index, input.length()).toString();
            matchList.add(match);
            index = m.end();
        }
    }

    // If no match was found, return this
    if (index == 0)
        return new String[] {input.toString()};

    // Add remaining segment
    if (!matchLimited || matchList.size() < limit)
        matchList.add(input.subSequence(index, input.length()).toString());

    // Construct result
    int resultSize = matchList.size();
    if (limit == 0)
        while (resultSize > 0 && matchList.get(resultSize-1).equals(""))
            resultSize--;
    String[] result = new String[resultSize];
    return matchList.subList(0, resultSize).toArray(result);
}
```

Java 8

```java
public String[] split(CharSequence input, int limit) {
    int index = 0;
    boolean matchLimited = limit > 0;
    ArrayList<String> matchList = new ArrayList<>();
    Matcher m = matcher(input);

    // Add segments before each match found
    while(m.find()) {
        if (!matchLimited || matchList.size() < limit - 1) {
            if (index == 0 && index == m.start() && m.start() == m.end()) {
                // no empty leading substring included for zero-width match
                // at the beginning of the input char sequence.
                continue;
            }
            String match = input.subSequence(index, m.start()).toString();
            matchList.add(match);
            index = m.end();
        } else if (matchList.size() == limit - 1) { // last one
            String match = input.subSequence(index, input.length()).toString();
            matchList.add(match);
            index = m.end();
        }
    }

    // If no match was found, return this
    if (index == 0)
        return new String[] {input.toString()};

    // Add remaining segment
    if (!matchLimited || matchList.size() < limit)
        matchList.add(input.subSequence(index, input.length()).toString());

    // Construct result
    int resultSize = matchList.size();
    if (limit == 0)
        while (resultSize > 0 && matchList.get(resultSize-1).equals(""))
            resultSize--;
    String[] result = new String[resultSize];
    return matchList.subList(0, resultSize).toArray(result);
}
```

The addition of the following code in Java 8 excludes the zero-length match at the beginning of the input string, which explains the behavior above.

```java
if (index == 0 && index == m.start() && m.start() == m.end()) {
    // no empty leading substring included for zero-width match
    // at the beginning of the input char sequence.
    continue;
}
```

Maintaining compatibility

Following behavior in Java 8 and above

To make split behave consistently across versions and compatible with the behavior in Java 8:

1. If your regex can match a zero-length string, just add (?!\A) at the end of the regex and wrap the original regex in a non-capturing group (?:...) (if necessary).
2. If your regex can't match a zero-length string, you don't need to do anything.
3. If you don't know whether the regex can match a zero-length string or not, do both the actions in step 1.

(?!\A) checks that the current position is not at the beginning of the string, which rules out an empty match at the beginning of the input.

Following behavior in Java 7 and prior

There is no general solution to make split backward-compatible with Java 7 and prior, short of replacing all instances of split to point to your own custom implementation.
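As a quick sanity check, the difference and the compatibility trick can be observed with a small self-contained snippet (the class name is arbitrary; run on Java 8+, where the zero-width match at position 0 no longer produces a leading empty string):

```java
import java.util.Arrays;

public class SplitCompat {
    public static void main(String[] args) {
        // On Java 8+, the zero-width match at position 0 produces no leading "".
        // On Java 7 this same call returns ["", "a", "b", "c"].
        String[] plain = "abc".split("");

        // Compat form: wrap the regex in (?:...) and append (?!\A).
        // On Java 7 the lookahead suppresses the leading empty string;
        // on Java 8+ it changes nothing, so both versions agree.
        String[] compat = "abc".split("(?:)(?!\\A)");

        System.out.println(Arrays.toString(plain));   // [a, b, c] on Java 8+
        System.out.println(Arrays.toString(compat));  // [a, b, c] on both
    }
}
```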
Split
22,718,744
116
I am currently trying to split a string 1128-2 so that I can have two separate values. For example, value1: 1128 and value2: 2, so that I can then use each value separately. I have tried split() but with no success. Is there a specific way Grails handles this, or a better way of doing it?
Try: def (value1, value2) = '1128-2'.tokenize( '-' )
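For reference, the same split can be written in plain Java when you are outside Groovy code (the class name here is just for illustration). One caveat: Groovy's tokenize discards empty tokens while split keeps them, which makes no difference for this input:

```java
public class TokenizeDemo {
    public static void main(String[] args) {
        // limit 2 keeps any further '-' characters inside the second value
        String[] parts = "1128-2".split("-", 2);
        String value1 = parts[0]; // "1128"
        String value2 = parts[1]; // "2"
        System.out.println(value1 + " / " + value2);
    }
}
```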
Split
16,450,680
116
I have a list: my_list = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847'] How can I delete the \t and everything after to get this result: ['element1', 'element2', 'element3']
Something like:

```
>>> l = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847']
>>> [i.split('\t', 1)[0] for i in l]
['element1', 'element2', 'element3']
```
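The same idea translates to Java, if a comparison is useful: Python's split('\t', 1) corresponds to split("\t", 2) in Java, because Java's limit counts the resulting parts rather than the number of splits. A minimal sketch (class name is arbitrary):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class BeforeTab {
    public static void main(String[] args) {
        List<String> l = Arrays.asList(
                "element1\t0238.94", "element2\t2.3904", "element3\t0139847");
        // Keep only the part before the first tab, mirroring i.split('\t', 1)[0]
        List<String> names = l.stream()
                              .map(s -> s.split("\t", 2)[0])
                              .collect(Collectors.toList());
        System.out.println(names); // [element1, element2, element3]
    }
}
```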
Split
6,696,027
110
I need to break apart a string that always looks like this: something -- something_else. I need to put "something_else" in another input field. Currently, this string example is being added to an HTML table row on the fly like this: tRow.append($('<td>').text($('[id$=txtEntry2]').val())); I figure "split" is the way to go, but there is very little documentation that I can find.
Documentation can be found e.g. at MDN. Note that .split() is not a jQuery method, but a native string method. If you use .split() on a string, then you get an array back with the substrings:

```javascript
var str = 'something -- something_else';
var substr = str.split(' -- ');
// substr[0] contains "something"
// substr[1] contains "something_else"
```

If this value is in some field you could also do:

```javascript
tRow.append($('<td>').text($('[id$=txtEntry2]').val().split(' -- ')[0]));
```
Split
2,555,794
110
I'm using the pipeline plugin for Jenkins and I'd like to generate a code coverage report for each run and display it along with the pipeline UI. Is there a plugin I can use to do that (e.g. Cobertura, but it doesn't seem to be supported by pipeline)?
There is a way to add a pipeline step to publish your coverage report but it doesn't show under the BlueOcean interface. It will show fine in the normal UI.

```groovy
pipeline {
    agent any
    stages {
        ...
    }
    post {
        always {
            junit '**/nosetests.xml'
            step([$class: 'CoberturaPublisher',
                  autoUpdateHealth: false,
                  autoUpdateStability: false,
                  coberturaReportFile: '**/coverage.xml',
                  failUnhealthy: false,
                  failUnstable: false,
                  maxNumberOfBuilds: 0,
                  onlyStable: false,
                  sourceEncoding: 'ASCII',
                  zoomCoverageChart: false])
        }
    }
}
```

Note that one of the parameters to the Cobertura plugin is the XML that it will use ('**/coverage.xml' in the example). If you are using python, you will want to use something like:

```
nosetests --with-coverage --cover-xml --cover-package=pkg1,pkg2 --with-xunit test
```
Jenkins
36,918,370
55
I'm aware that if we use a iFrame in HTML we've to sandbox it & add the 'allow-scripts' permission to be true. But my problem is I don't have a iFrame at all in my pure Angular JS application. When I run it on my local machine it works fine. The moment I deploy it to my server, Chrome displays this error message along with the below error: Refused to load the style 'bootstrap.min.css' because it violates the following Content Security Policy directive: "style-src 'self'". Blocked script execution in 'dashboard.html' because the document's frame is sandboxed and the 'allow-scripts' permission is not set. I'm not invoking the page from a 3rd party site or elsewhere which could possibly inject my source & make it appear in a iframe. I inspected the code & I can confirm there are no iframes at all. BTW, I use a very old version of Chrome (26) and Firefox (10) [Organisational restrictions]. This happens on IE11 as well (Though no error message displayed) the page doesn't load up. What could be causing this ? Am I missing anything here ? Any pointers would be greatly appreciated. Below is a snapshot of what I'm trying to do... Trivial parts trimmed out.. <html lang="en" ng-app="dashboard"> <head> <title>Dashboard</title> <link href="css/bootstrap.min.css" rel="stylesheet"> <script src="js/jquery.min.js"></script> <script src="js/angular.min.js"></script> <script src="js/ui-bootstrap-tpls-0.6.0.js"></script> <script src="js/bootstrap.min.js"></script> <script src="js/notifications.js"></script> <style> body { background-color: #F3F3F4; color: #676a6c; font-size: 13px;} </style> <script> var dashboardApp = angular.module('dashboard', ['ui.bootstrap', 'notificationHelper']); Type = { APP : 0, CTL : 1 } function DashboardCtrl($scope, $location, $timeout, $http, $log, $q) { $scope.environments = [ { ... }]; $scope.columns = [ { ... } ]; $scope.Type = window.Type; $scope.applications = [{ ... 
}]; $scope.selectedEnv = null; var resetModel = function(applications) { applications.forEach(function(app) { var hosts=$scope.findHosts(app, $scope.selectedEnv); if(hosts){ hosts.forEach(function(host){ $scope.initStatus(app.status,host); }); } }); }; var timeoutPromise = null; $scope.initStatus = function (status,host) { status[host]=[{ ... }]; }; } </script> </head> <body ng-controller="DashboardCtrl"> <div class="request-notifications" ng-notifications></div> <div> <tabset> <tab ng-repeat="env in environments" heading="{{env.name}}" select="set(env)" active="env.tab_active"> <div class="col-md-6" ng-repeat="column in columns" ng-class="{'vertical-seperator':$first}"> <div class="panel" ng-class="{'first-child':$first}"> <div class="panel-heading"> <h3>{{column.column}}</h3> </div> <div class="panel-body"> <div class="frontends" ng-repeat="layer in column.layers"> <h4>{{layer.name}}</h4> <div class="category" ng-repeat="category in layer.categories" ng-class="category.css"> <div class="category-heading"> <h4>{{category.name}}</h4> </div> <div class="category-body group" ng-repeat="group in category.groups"> <div ng-if="!env[group.host]"> <h4>{{group.name}}</h4> <span class="label label-danger">Not deployed</span> </div> <div ng-repeat="host in env[group.host]"> <div class="group-info"> <div class="group-name">{{group.name}}</div> <div class="group-node"><strong>Node : </strong>{{host}}</div> </div> <table class="table table-striped"> <thead> <tr> ... 
</tr> </thead> <tbody> <tr class="testStatusPage" ng-repeat="app in apps | filter: { column: column.column, layer: layer.name, category: category.name, group: group.name } : true"> <!-- Application Home Links --> <td class="user-link" ng-if="app.type === Type.A || app.type === Type.A1 || app.type === Type.B || app.type === Type.B1 || app.type === Type.C"><a href="{{app.link}}">{{app.text}}</a></td> <td ng-if="app.status[host].statusCode == 0" class="result statusResult"><span class="label label-success">Success</span></td> <td ng-if="app.status[svr].status != null && app.status[host].status != 0" class="result statusResult"><span class="label label-danger">{{app.status[host].error}}</span></td> </tr> </tbody> </table> </div> </div> </div> </div> </div> </div> </div> </tab> </tabset> </div> </body> </html>
We were using this HTML content in a Jenkins userContent directory. We recently upgraded to the latest Jenkins 1.625 LTS version and it seems they've introduced a new Content Security Policy which adds the below header to the response headers, so the browsers simply decline to execute anything like stylesheets / JavaScript.

X-Content-Security-Policy: sandbox; default-src 'none'; img-src 'self'; style-src 'self';

To get over it, we had to simply remove this header by resetting the below property in Jenkins.

System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "")

Those who upgrade to Jenkins 1.625 and use the userContent folder might be affected by this change. For more information refer https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy
Jenkins
34,315,723
55
I am running Jenkins version 1.411 and use Maven for building. Even though the application builds successfully, Jenkins treats it as an unstable build. I have disabled all tests to isolate the problem. [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 45.389s [INFO] Finished at: Wed May 11 12:16:57 EST 2011 [INFO] [DocLinks] Skip document Adaptiv generated site ... (No such directory: site) Final Memory: 27M/543M [INFO] ------------------------------------------------------------------------ channel stopped Archiving artifacts Email was triggered for: Unstable Sending email for trigger: Unstable An attempt to send an e-mail to empty list of recipients, ignored. Finished: SUCCESS
It's been some time since I used Hudson/Jenkins, but you should have a look at the Jenkins Glossary:

Unstable build: A build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable.

Publisher: A publisher is part of the build process other than compilation, for example JUnit test runs. A publisher may report a stable or unstable result depending on the result of its processing. For example, if a JUnit test fails, then the whole JUnit publisher may report unstable.

So I suppose you have some other build parts (apart from JUnit) that report an unstable result. Have a look at the whole build process log.
Jenkins
5,958,592
55
I know I can call localhost/job/RSpec/lastBuild/api/json to get the status of the lastest Jenkins build. However, since our build runs very long (a couple hours), I'm really more interested in the last complete build status than the run that is running at this exact moment. Is there an API end point for the last fully run build? Or should I instead pull the full list of builds and select the next-to-last one from there?
Try http://$host/job/$jobname/lastSuccessfulBuild/api/json Jenkins (and Hudson) expose multiple different builds, such as lastBuild, lastStableBuild, lastSuccessfulBuild, lastFailedBuild, lastUnstableBuild, lastUnsuccessfulBuild, lastCompletedBuild.
Jenkins
18,238,616
54
I would like to set the build name and description from a Jenkins Declarative Pipeline, but can't find the proper way of doing it. I tried using an environment bracket after the pipeline, using a node bracket in an agent bracket, etc. I always get syntax error. The last version of my Jenkinsfile goes like so: pipeline { stages { stage("Build") { steps { echo "Building application..." bat "%ANT_HOME%/bin/ant.bat clean compile" currentBuild.name = "MY_VERSION_NUMBER" currentBuild.description = "MY_PROJECT MY_VERSION_NUMBER" } } stage("Unit Tests") { steps { echo "Testing (JUnit)..." echo "Testing (pitest)..." bat "%ANT_HOME%/bin/ant.bat run-unit-tests" } } stage("Functional Test") { steps { echo "Selenium..." } } stage("Performance Test") { steps { echo "JMeter.." } } stage("Quality Analysis") { steps { echo "Running SonarQube..." bat "%ANT_HOME%/bin/ant.bat run-sonarqube-analysis" } } stage("Security Assessment") { steps { echo "ZAP..." } } stage("Approval") { steps { echo "Approval by a CS03" } } stage("Deploy") { steps { echo "Deploying..." } } } post { always { junit '/test/reports/*.xml' } failure { emailext attachLog: true, body: '', compressLog: true, recipientProviders: [[$class: 'CulpritsRecipientProvider'], [$class: 'DevelopersRecipientProvider']], subject: '[JENKINS] MY_PROJECT build failed', to: '...recipients...' } success { emailext attachLog: false, body: '', compressLog: false, recipientProviders: [[$class: 'DevelopersRecipientProvider']], subject: '[JENKINS] MY_PROJECT build succeeded', to: '...recipients...' } } } Error is: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: WorkflowScript: 11: Expected a step @ line 11, column 5. currentBuild.name = "MY_VERSION_NUMBER" ^ WorkflowScript: 12: Expected a step @ line 12, column 5. currentBuild.description = "MY_PROJECT MY_VERSION_NUMBER" ^ Ideally, I'd like to be able to read MY_PROJECT and MY_VERSION_NUMBER from the build.properties file, or from the Jenkins build log. 
Any guidance about that requirement would be appreciated as well. UPDATE Based on the answer I had below, the following worked: stage("Build") { steps { echo "Building application..." bat "%ANT_HOME%/bin/ant.bat clean compile" script { def props = readProperties file: 'build.properties' currentBuild.displayName = "v" + props['application.version'] } } Now the build version is automatically set during the pipeline by reading the build.properties file.
I think this will do what you want. I was able to do it inside a script block:

```groovy
pipeline {
    stages {
        stage("Build") {
            steps {
                script {
                    currentBuild.displayName = "The name."
                    currentBuild.description = "The best description."
                }
                ... do whatever.
            }
        }
    }
}
```

The script block is kind of an escape hatch to get out of a declarative pipeline. There is probably a declarative way to do it but I couldn't find it. And one more note: I think you want currentBuild.displayName instead of currentBuild.name. In the documentation for Jenkins globals I didn't see a name property under currentBuild.
Jenkins
43,639,099
54
I want to access and grep Jenkins Console Output as a post build step in the same job that creates this output. Redirecting logs with >> log.txt is not a solution since this is not supported by my build steps. Build: echo "This is log" Post build step: grep "is" path/to/console_output Where is the specific log file created in filesystem?
@Bruno Lavit has a great answer, but if you want you can just access the log and download it as txt file to your workspace from the job's URL: ${BUILD_URL}/consoleText Then it's only a matter of downloading this page to your ${Workspace} You can use "Invoke ANT" and use the GET target On Linux you can use wget to download it to your workspace etc. Good luck! Edit: The actual log file on the file system is not on the slave, but kept in the Master machine. You can find it under: $JENKINS_HOME/jobs/$JOB_NAME/builds/lastSuccessfulBuild/log If you're looking for another build just replace lastSuccessfulBuild with the build you're looking for.
Jenkins
37,386,581
54
Pretty new to Jenkins and I have a simple yet annoying problem. When I run a job (build) on Jenkins I am triggering a ruby command to execute my test script. The problem is Jenkins is not displaying output in real time from the console. Here is the trigger log.

Building in workspace /var/lib/jenkins/workspace/foo_bar
No emails were triggered.
[foo_bar] $ /bin/sh -xe /tmp/hudson4042436272524123595.sh
+ ruby /var/lib/jenkins/test-script.rb

Basically it hangs on this output until the build is complete, then it just shows the full output. The funny thing is this is not consistent behavior; sometimes it works as it should. But most of the time there is no real time console output.

Jenkins version: 1.461
To clarify some of the answers: ruby or python or any sensible scripting language will buffer the output; this is in order to minimize the IO. Writing to disk is slow, writing to a console is slow... usually the data gets flush()'ed automatically after you have enough data in the buffer, with special handling for newlines. e.g. writing a string without a newline and then calling sleep() would not write anything until after the sleep() is complete (I'm only using sleep as an example, feel free to substitute it with any other expensive system call).

e.g. this would wait 8 seconds, print one line, wait 5 more seconds, print a second line.

```python
import time

def test():
    print "ok",
    time.sleep(3)
    print "now",
    time.sleep(5)
    print "done"
    time.sleep(5)
    print "again"

test()
```

For ruby, STDOUT.sync = true turns the autoflush on; all writes to STDOUT are followed by flush(). This would solve your problem but result in more IO.

```ruby
STDOUT.sync = true
```

For python, you can use python -u or the environment variable PYTHONUNBUFFERED to make stdin/stdout/stderr unbuffered, but there are other solutions that do not change stdin or stderr:

```
export PYTHONUNBUFFERED=1
```

For perl, you have autoflush:

```perl
autoflush STDOUT 1;
```
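The same buffering effect can be demonstrated in Java; this is a minimal illustrative sketch (not Jenkins-specific, class name arbitrary): with autoFlush off, a println stays in the buffer and nothing reaches the underlying stream until an explicit flush():

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class FlushDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // autoFlush=false: println() leaves the bytes in the buffer
        PrintStream out = new PrintStream(new BufferedOutputStream(sink), false);
        out.println("This is log");
        int before = sink.size(); // still 0: the line sits in the buffer
        out.flush();
        int after = sink.size();  // now the line has reached the sink
        System.out.println(before + " -> " + after);
    }
}
```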
Jenkins
11,631,951
54
I am trying to setup jenkins, but I cant get the authentication to work. I am running jenkins on Tomcat6 on CentOS 6.2. I enable logging in, and everything goes fine until I try to log in. After giving my credential and pressing login, tomcat gives me a error: "HTTP Status 404 - The requested resource () is not available." on http://myserver:8080/jenkins/j_acegi_security_check By googling I can find this: https://issues.jenkins-ci.org/browse/JENKINS-3761 Two suggested fixes I have found: Run jenkins on tomcat instead of running the standalone version - I am already doing so. Edit a file: WEB-INF/security/SecurityFilters.groovy - I tried to edit, but I can't get it to change anything Is there something I could do to make this work?
Spent ages wrestling with this one, make sure a Security Realm is set when you are choosing your Authorization method in Jenkins. That is, in Manage Jenkins → Configure Global Security select an option in the Security Realm list. For example:
Jenkins
9,684,320
54
I trigger a shell script from Jenkins, This scripts get date and export it as a environment(Linux) variable $DATE. I need to use this $DATE inside same Jenkins job. I made job as parameter build. Created a string parameter as DATE value as DATE=$DATE. But it is not working. Please suggest !!
You mention that you are exporting a DATE environment variable in a shell script, which is presumably being started via an "Execute shell" step. The problem is, once the shell step has completed, that environment is gone — the variables will not be carried over to subsequent build steps. So when you later try to use the $DATE value — whether in another build step, or as a parameter to another job — that particular environment variable will no longer exist.

What you can do instead is use the EnvInject plugin to export environment variables during a build. Variables set up using this plugin will be available to all subsequent build steps. For example, you could write the DATE to a properties file in one build step:

echo DATE=$(date +%Y-%m-%d) > env.properties

Then you can add an "Inject environment variables for your job" build step, and enter env.properties in the "Environment Properties File Path" field. That way, the DATE variable (and anything else in that properties file) will be exported and will be visible to the rest of the build steps.
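To illustrate the key=value format involved, here is a small Java sketch (the file name env.properties matches the example above; the class name is arbitrary and not part of Jenkins) that writes such a file and reads it back the way a properties-file consumer like EnvInject would:

```java
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.time.LocalDate;
import java.util.Properties;

public class EnvPropsDemo {
    public static void main(String[] args) throws IOException {
        // Write the same key=value file the shell step produces
        try (PrintWriter w = new PrintWriter("env.properties")) {
            w.println("DATE=" + LocalDate.now()); // ISO format, e.g. DATE=2024-05-01
        }
        // Read it back as a standard properties file
        Properties p = new Properties();
        try (FileReader r = new FileReader("env.properties")) {
            p.load(r);
        }
        System.out.println("DATE=" + p.getProperty("DATE"));
    }
}
```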
Jenkins
30,110,876
53
We are having the same issue found here, here, here and here Basically we upgraded to xcode 6.1 and our build are getting the "ResourceRules.plist: cannot read resources" error. We have a Jenkins server that does our ios builds for us. We are using the Xcode plugin on Jenkins to do the actual build and signing. Any thoughts on how we can make this change without manually opening xcode and doing this solution found on the other answers: Click on your project > Targets > Select your target > Build Settings > Code Signing Resource Rules Path and add : $(SDKROOT)/ResourceRules.plist I'm very new to Xcode and iOS build in general. I have found the project.pbxproj file inside the Unity-iPhone.xcodeproj file. It looks like this contains the build settings under the /* Begin XCBuildConfiguration section */ section it lists what looks like similar build properties foundin Xcode, however I do not see anything like "Code Signing Resource Rules Path". Does anyone have experience manually editing this file? Is that a bad idea in general? Thanks
If you're using Jenkins with the XCode plugin, you can modify the 'Code Signing Resource Rules Path' variable by adding: "CODE_SIGN_RESOURCE_RULES_PATH=$(SDKROOT)/ResourceRules.plist" to the 'Custom xcodebuild arguments' setting for the XCode plugin. This fix does not require the XCode GUI.
Jenkins
26,516,442
53
How can I run a cron job every 15 mins on Jenkins? This is what I've tried : On Jenkins I have a job set to run every 15 mins using this cron syntax : 14 * * * * But the job executes every hour instead of 15 mins. I'm receiving a warning about the format of the cron syntax : Spread load evenly by using ‘H * * * *’ rather than ‘14 * * * *’ Could this be the reason why the cron job executes every hour instead of 15 mins ?
Your syntax is slightly wrong. Say:

```
*/15 * * * * command
 |
 |--> `*/15` would imply every 15 minutes.
```

* indicates that the cron expression matches for all values of the field.
/ describes increments of ranges.
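To see exactly which minutes */15 selects, here is a small illustrative snippet (not part of Jenkins or cron; the class name is arbitrary) that enumerates the minute values the step expression matches:

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CronStepDemo {
    public static void main(String[] args) {
        int step = 15;
        // "*/15" in the minutes field matches every minute divisible by the step
        String minutes = IntStream.range(0, 60)
                                  .filter(m -> m % step == 0)
                                  .mapToObj(Integer::toString)
                                  .collect(Collectors.joining(","));
        System.out.println(minutes); // 0,15,30,45
    }
}
```

With 14 * * * *, by contrast, only minute 14 of each hour matches, which is why the job fired hourly.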
Jenkins
19,443,732
53
I'm running Ubuntu 11.10 and have run sudo apt-get install jenkins to install Jenkins on this system. I've seen some tutorials on how to setup a reverse proxy (Apache, Nginx, etc), however this is a VM dedicated for just jenkins and I'd like keep it as lean as possible while having jenkins running on port 80. I've found the upstart config in /etc/init/jenkins.conf and modified the port to 80 env HTTP_PORT=80 When I start jenkins via service jenkins start, ps reveals that it runs for a few seconds then terminates. Is this because jenkins is running as the jenkins user on a privileged port? If so, how do I fix this? Any other ideas a welcome. Here is the upstart config: description "jenkins: Jenkins Continuous Integration Server" author "James Page <[email protected]>" start on (local-filesystems and net-device-up IFACE!=lo) stop on runlevel [!2345] env USER="jenkins" env GROUP="jenkins" env JENKINS_LOG="/var/log/jenkins" env JENKINS_ROOT="/usr/share/jenkins" env JENKINS_HOME="/var/lib/jenkins" env JENKINS_RUN="/var/run/jenkins" env HTTP_PORT=80 env AJP_PORT=-1 env JAVA_OPTS="" env JAVA_HOME="/usr/lib/jvm/default-java" limit nofile 8192 8192 pre-start script test -f $JENKINS_ROOT/jenkins.war || { stop ; exit 0; } $JENKINS_ROOT/bin/maintain-plugins.sh mkdir $JENKINS_RUN > /dev/null 2>&1 || true chown -R $USER:$GROUP $JENKINS_RUN || true end script script JENKINS_ARGS="--webroot=$JENKINS_RUN/war --httpPort=$HTTP_PORT --ajp13Port=$AJP_PORT" exec daemon --name=jenkins --inherit --output=$JENKINS_LOG/jenkins.log --user=$USER \ -- $JAVA_HOME/bin/java $JAVA_OPTS -jar $JENKINS_ROOT/jenkins.war $JENKINS_ARGS \ --preferredClassLoader=java.net.URLClassLoader end script
Another solution is to simply use iptables to reroute incoming traffic from 80 to 8080. The rules would look like:

```
-A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
-A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
-A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```

Reformatted as an iptables.rules file:

```
*filter
:INPUT ACCEPT [100:100000]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [95:9000]
-A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
-A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
COMMIT

*nat
-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
COMMIT
```

The advantage of an iptables.rules file is the rules can persist after reboots. Just make sure to integrate any other current iptables rules into the same file! On Redhat/CentOS this file can go in /etc/sysconfig/iptables. On Debian/Ubuntu systems they can be saved in /etc/iptables/rules.v4 by using the iptables-persistent package. Or the iptables.rules can be called by modifying /etc/network/interfaces or hooking into if-up/if-down scripts. The Ubuntu Community wiki has a great page explaining these methods. As is usually the case with networking, there's a lot of different ways to accomplish the same result. Use what works best for you!
Jenkins
9,330,367
53
I'm hoping to add a conditional stage to my Jenkinsfile that runs depending on how the build was triggered. Currently we are set up such that builds are either triggered by: changes to our git repo that are picked up on branch indexing a user manually triggering the build using the 'build now' button in the UI. Is there any way to run different pipeline steps depending on which of these actions triggered the build?
In Jenkins Pipeline without currentBuild.rawBuild access the build causes could be retrieved in the following way:

```groovy
// started by commit
currentBuild.getBuildCauses('jenkins.branch.BranchEventCause')
// started by timer
currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause')
// started by user
currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
```

You can get a boolean value with:

```groovy
isTriggeredByTimer = !currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause').isEmpty()
```

Or, as getBuildCauses() returns an array, the array's size will work correctly with Groovy truthy semantics:

```groovy
if (currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause')) {
    // ...
}
```
Jenkins
43,597,803
52
I am using pipeline jobs with Jenkins 2.0, but I don't see option 'disable job' as I was used to in older Jenkins versions. Am I missing something? Is it still possible to disable (pipeline) job?
You can simply use the "Disable Project" option from Jenkins 2.89.4 onward in order to disable the pipeline Jobs.
Jenkins
38,785,926
52
I'm setting up Jenkins to automate the build process. In particular, for my needs, I'd like to be able to set different bundle identifiers. I'm using the Xcode Jenkins plugin to set the bundle identifier: The problem is that this will change the bundle identifier in the Info.plist file and in MyTarget > General > Bundle Identifier. But it won't change the bundle identifier in Build Settings > Packaging > Product Bundle Identifier. The same thing happens if I do it manually. I create a new project in Xcode 7. By default, the three values are: When I change the value in the Info.plist file like this: The other two values will be: So as you can see the value in Build Settings is not changing. If I'm in Xcode I can change that value manually, but if I'm building the project in Jenkins this is a big issue. Anyone encountered the same problem? How do you tackle it? Thanks!
Faced the same problem. The PRODUCT_BUNDLE_IDENTIFIER is a variable in your project.pbxproj file. Change that to whatever you want and it will reflect both in your Info.plist as well as the project settings.
Jenkins
32,862,253
52
I don't know why "logged in users can do anything" means Jenkins will happily allow non-authenticated users to view project details and access artifacts... Regardless, I need to know how to get Jenkins to allow logged in users to to anything AND hide EVERYTHING for users who AREN'T logged in. Help please?
This can be done with the Role-Strategy plugin. Install the plugin, add a new group called "Anonymous" and uncheck everything. Then you want to add another group called "authenticated" and check everything. Add your existing users to this group. Jenkins will immediately prompt you for a login this way.
Jenkins
14,226,681
52
I delete old jenkins builds with rm where job is hosted: my_job/builds/$ rm -rf [1-9]* These old builds are still visible in job page. How to remove them with command line? (without the delete button in each build user interface)
Here is another option: delete the builds remotely with cURL. (Replace the beginning of the URLs with whatever you use to access Jenkins with your browser.) $ curl -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll The above deletes build #1 to #56 for job myJob. If authentication is enabled on the Jenkins instance, a user name and API token must be provided like this: $ curl -u userName:apiToken -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll The API token must be fetched from the /me/configure page in Jenkins. Just click on the "Show API Token..." button to display both the user name and the API token. Edit: As pointed out by yegeniy in a comment below, one might have to replace doDeleteAll by doDelete in the URLs above to make this work, depending on the configuration.
Jenkins
13,052,390
52
I've installed Jenkins on my mac (osx lion). But I couldn't get it work. This is the stacktrace I've got: Started by user anonymous Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/test/workspace - hudson.remoting.LocalChannel@1c0a0847 Using strategy: Default Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/test/workspace - hudson.remoting.LocalChannel@1c0a0847 Cloning the remote Git repository Cloning repository origin Error trying to determine the git version: Error performing command: /usr/local/git/ --version Cannot run program "/usr/local/git/" (in directory "/Users/Shared/Jenkins/Home/jobs/test/workspace"): error=13, Permission denied Assuming 1.6 ERROR: Error cloning remote repo 'origin' : Could not clone [email protected]:iRest.git ERROR: Cause: Error performing command: /usr/local/git/ clone -o origin [email protected]:iRest.git /Users/Shared/Jenkins/Home/jobs/test/workspace Cannot run program "/usr/local/git/": error=13, Permission denied Trying next repository ERROR: Could not clone repository FATAL: Could not clone hudson.plugins.git.GitException: Could not clone at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1046) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:972) at hudson.FilePath.act(FilePath.java:783) at hudson.FilePath.act(FilePath.java:765) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:972) at hudson.model.AbstractProject.checkout(AbstractProject.java:1195) at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:571) at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:460) at hudson.model.Run.run(Run.java:1404) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:238)
The solution for me was to set the git path in the Manage Jenkins > Global Tool Configuration settings. In the Git section, I changed the Path to Git executable to /usr/local/bin/git.
Jenkins
8,639,501
52
Here's my Jenkins 2.x pipeline: node ('master'){ stage 'Checkout' checkout scm stage "Build Pex" sh('build.sh') } When I run this pipeline the checkout puts the code into to the workspace as expected, however instead of expecting to find the script in workspace/ (it's really there!), it looks in an unrelated directory: workspace@tmp/durable-d812f509. Entering stage Build Pex Proceeding [Pipeline] sh [workspace] Running shell script + build.sh /home/conmonsysdev/deployments/jenkins_ci_2016_interns/jenkins_home/jobs/pex/branches/master/workspace@tmp/durable-d812f509/script.sh: line 2: build.sh: command not found How do I modify this Jenkinsfile so that build.sh is executed in the exact same directory as where I checked out the project source code?
You can enclose your actions in a dir block:

    checkout scm
    stage "Build Pex"
    dir ('<your new directory>') {
        sh('./build.sh')
    }

... or ...

    checkout scm
    stage "Build Pex"
    sh(""" <path to your new directory>/build.sh""")

<your new directory> is a placeholder for your actual directory. By default it is a relative path to the workspace. You can define an absolute path if you are sure it is present on the agent.
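Note the ./ prefix: a bare sh('build.sh') fails with "command not found" even in the correct directory, because the workspace is not on the shell's PATH. A standalone sketch of that behaviour (the one-line build.sh here is made up):

```shell
# A script in the current directory is not found by name alone,
# because "." is normally not on PATH; the explicit ./ prefix works.
printf '#!/bin/sh\necho built\n' > build.sh
chmod +x build.sh
./build.sh
```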
Jenkins
38,143,485
51
Can anyone suggest a way to execute JaCoCo in a Jenkins pipeline? I have downloaded the plugin, but I do not get an option for JaCoCo in the 'Pipeline Syntax' (the Pipeline script help). I referred to this URL: https://wiki.jenkins-ci.org/display/JENKINS/JaCoCo+Plugin, which has no information on a Jenkins JaCoCo pipeline.
The JaCoCo pipeline step configuration uses this format:

    step([$class: 'JacocoPublisher',
          execPattern: 'target/*.exec',
          classPattern: 'target/classes',
          sourcePattern: 'src/main/java',
          exclusionPattern: 'src/test*'
    ])

Or with a simpler syntax for declarative pipeline:

    jacoco(
          execPattern: 'target/*.exec',
          classPattern: 'target/classes',
          sourcePattern: 'src/main/java',
          exclusionPattern: 'src/test*'
    )

You can find more options in the JaCoCo Pipeline Steps Reference.
Jenkins
41,893,846
51
I have installed the Docker build step plugin for Jenkins. The documentation tells me:

    Name: Choose a name for this Docker cloud provider
    Docker URL: The URL to use to access your Docker server API (e.g: http://172.16.42.43:4243)

How can I find my URL to the REST API (I have Docker installed on my host)?
If you are on Linux and need to connect to Docker API on the local machine, its URL is probably unix:///var/run/docker.sock, like it is mentioned in documentation: Develop with Docker Engine SDKs and API By default the Docker daemon listens on unix:///var/run/docker.sock and the client must have root access to interact with the daemon. If a group named docker exists on your system, docker applies ownership of the socket to the group. This might be helpful if you are connecting to Docker from a JetBrains IDE.
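To check which URL your local daemon actually answers on, you can query the Engine API directly over that socket. A quick sketch (assumes curl 7.40 or newer and that your user can read /var/run/docker.sock):

```shell
# Ask the local Docker Engine API for its version over the default
# Unix socket; a JSON response confirms the daemon is reachable there.
curl --unix-socket /var/run/docker.sock http://localhost/version
```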
Jenkins
37,178,824
51
I updated some plugins and restarted Jenkins, but now it says:

    Please wait while Jenkins is restarting
    Your browser will reload automatically when Jenkins is ready.

It is taking too much time (I have been waiting for the last 40 minutes). I have only 1 project with around 20 builds. I have restarted Jenkins many times before and it worked fine, but now it is stuck. Is there any way to kill/suspend Jenkins to avoid this wait?
I had a very similar issue when using Jenkins' built-in restart function. To fix it I killed the service (with crossed fingers), but somehow it kept serving the "Please wait" page. I guess it is served by a separate thread, but since I could not see any running Java or Jenkins processes I restarted the server to stop it. After the reboot Jenkins worked, but it was not updated. To make it work I ran the update again and restarted the Jenkins service manually; it took less than a minute and worked just fine...

Jenkins seems to have a number of bugs related to restarting, and at least one unresolved: jenkins issue
Jenkins
17,344,061
51
I am currently seeing a set of errors across my builds. Is this expected behaviour if you lose Jenkins (say to a box crash, or a kill -9)? Or is there something worse going on (like a bad network connection)? The stack and error is:

    hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:158)
    at $Proxy175.join(Unknown Source)
    at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:861)
    at hudson.Launcher$ProcStarter.join(Launcher.java:345)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
    at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
    at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703)
    at hudson.model.Build$RunnerImpl.build(Build.java:178)
    at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
    at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473)
    at hudson.model.Run.run(Run.java:1410)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
    at hudson.model.ResourceController.execute(ResourceController.java:88)
    at hudson.model.Executor.run(Executor.java:238)
    Caused by: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
    at hudson.remoting.Request.abort(Request.java:273)
    at hudson.remoting.Channel.terminate(Channel.java:732)
    at hudson.remoting.Channel$ReaderThread.run(Channel.java:1157)
    Caused by: java.io.IOException: Unexpected termination of the channel
    at hudson.remoting.Channel$ReaderThread.run(Channel.java:1133)
    Caused by: java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2554)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1297)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
    at hudson.remoting.Channel$ReaderThread.run(Channel.java:1127)
You'll see that error if the Jenkins master loses connectivity with the slave. It could be due to any of the three issues you listed above:

- Manually killing the slave process
- The slave server becoming unavailable
- A network problem between the master and the slave
Jenkins
9,998,298
51
In relation to Jenkins DSL, what is the difference between:

    def cwd = pwd()

and

    cwd = pwd()

?
It's a difference of scope. When you assign a value to a variable without a "def" or other type, in a Groovy script, it's added to the "binding", the global variables for the script. That means it can be accessed from all functions within the script. It's a lot like if you had the variable defined at the top of the script. You could wind up with unexpected behavior if multiple threads are acting on the script:

    def a = {
        x = 1
        println x
    }
    def b = {
        x = 2
        println x
    }
    new Thread(a).start()
    new Thread(b).start()

... could produce two ones, two twos, or a mix.

In contrast, using "def" makes a local variable:

    def a = {
        def x = 1
        println x
    }
    def b = {
        def x = 2
        println x
    }
    new Thread(a).start()
    new Thread(b).start()

... will always print a 1 and a 2, in arbitrary order.
Jenkins
39,514,795
50
I am doing a simple pipeline: Build -> Staging -> Production. I need different environment variables for staging and production, so I am trying to source the variables:

    sh 'source $JENKINS_HOME/.envvars/stacktest-staging.sh'

But it returns "not found":

    [Stack Test] Running shell script
    + source /var/jenkins_home/.envvars/stacktest-staging.sh
    /var/jenkins_home/workspace/Stack Test@tmp/durable-bcbe1515/script.sh: 2: /var/jenkins_home/workspace/Stack Test@tmp/durable-bcbe1515/script.sh: source: not found

The path is right, because I run the same command when I log in via ssh, and it works fine. Here is the pipeline idea:

    node {
        stage name: 'Build'
        // git and gradle build OK
        echo 'My build stage'

        stage name: 'Staging'
        sh 'source $JENKINS_HOME/.envvars/stacktest-staging.sh' // PROBLEM HERE
        echo '$DB_URL' // Expects http://production_url/my_db
        sh 'gradle flywayMigrate' // To staging
        input message: "Does Staging server look good?"

        stage name: 'Production'
        sh 'source $JENKINS_HOME/.envvars/stacktest-production.sh'
        echo '$DB_URL' // Expects http://production_url/my_db
        sh 'gradle flywayMigrate' // To production
        sh './deploy.sh'
    }

What should I do? I was thinking about not using pipeline (but then I will not be able to use my Jenkinsfile). Or making different jobs for staging and production, using the EnvInject Plugin (but I lose my stage view). Or using withEnv (but the code gets big, because today I am working with 12 env vars).
One way you could load environment variables from a file is to load a Groovy file. For example:

Let's say you have a Groovy file in '$JENKINS_HOME/.envvars' called 'stacktest-staging.groovy'. Inside this file, you define the 2 environment variables you want to load:

    env.DB_URL="hello"
    env.DB_URL2="hello2"

You can then load this in using:

    load "$JENKINS_HOME/.envvars/stacktest-staging.groovy"

Then you can use them in subsequent echo/shell steps. For example, here is a short pipeline script:

    node {
        load "$JENKINS_HOME/.envvars/stacktest-staging.groovy"
        echo "${env.DB_URL}"
        echo "${env.DB_URL2}"
    }
Jenkins
39,171,341
50
I have installed Jenkins plugins in two ways: manually placing the .hpi file in the Jenkins home directory, and installing from the Jenkins front-end (Manage Jenkins > Manage Plugins). What I notice is that when I install the plugin manually (downloaded as an .hpi file) it is installed with the extension .hpi, while when installing the plugin through the Jenkins front-end it is installed as .jpi. But why? What is going on in the background? I know the functionality won't change, but it seems interesting to know.
Both are supposed to be identical, to the extent that Jenkins renames hpi to jpi when you install it manually, as you said. The reason why you see both in your JENKINS_HOME is the order in which plugins are loaded when Jenkins boots up: plugin.jpi gets precedence over plugin.hpi in case both are present. This is the way the upload installation makes sure the uploaded version will override the existing one after the restart.
Jenkins
30,658,375
50
We've recently set up a Jenkins CI server on Windows. Now in order to use Active Directory authentication I'd like to require https (SSL/TLS) for access. Given this setup, what is the recommended way to do this?
Go to your %JENKINS_HOME% and modify the jenkins.xml. Where you see --httpPort=8080 change it to --httpPort=-1 --httpsPort=8080 you can make the ports anything you want of course, but in my testing (a while ago, it may have changed) if you don't keep --httpPort=<something> then Jenkins will always use 8080. So if you simply change --httpPort=8080 to --httpsPort=8080, port 8080 will still use http. Also, if you want to use your own certificate, there are some instructions at the bottom of this page. http://wiki.jenkins-ci.org/display/JENKINS/Starting+and+Accessing+Jenkins
Jenkins
5,313,703
50
We have a .NET Full Framework WPF application that we've moved from .NET 4.6.2 to 4.7.1, along with changing to PackageReference in the csproj file instead of packages.config. Building on the development machines appears to be fine and packages are downloaded and restored, but when we build on our Windows Server 2012 build server with Jenkins, the NuGet packages don't seem to be restored correctly.

We're using MSBuild v15.5 with the latest "msbuild /restore" command to restore packages at build time. Note: using the previous way of calling "nuget restore" does work, but we should be able to use msbuild /restore now.

The package restore process appears to be looking at the correct NuGet servers and appears to go through the restore without errors (this is a test solution compiled on Jenkins to isolate the issue):

    Restore:
      Restoring packages for c:\Jenkins\workspace\Test\ConsoleApp1\ConsoleApp1.csproj...
      Committing restore...
      Generating MSBuild file c:\Jenkins\workspace\Test\ConsoleApp1\obj\ConsoleApp1.csproj.nuget.g.props.
      Generating MSBuild file c:\Jenkins\workspace\Test\ConsoleApp1\obj\ConsoleApp1.csproj.nuget.g.targets.
      Writing lock file to disk. Path: c:\Jenkins\workspace\Test\ConsoleApp1\obj\project.assets.json
      Restore completed in 577.05 ms for c:\Jenkins\workspace\Test\ConsoleApp1\ConsoleApp1.csproj.
      NuGet Config files used:
        c:\Jenkins\workspace\Test\NuGet.Config
        C:\Windows\system32\config\systemprofile\AppData\Roaming\NuGet\NuGet.Config
      Feeds used:
        http://devbuild/NuGetHost/nuget
        https://api.nuget.org/v3/index.json
    Done Building Project "c:\Jenkins\workspace\Test\ConsoleApp1.sln" (Restore target(s)).
But when MSBuild comes to compile the code, we get the following errors, which look like the NuGet packages haven't been downloaded:

    CSC : error CS0006: Metadata file 'C:\Windows\system32\config\systemprofile\.nuget\packages\log4net\2.0.8\lib\net45-full\log4net.dll' could not be found [c:\Jenkins\workspace\Test\ConsoleApp1\ConsoleApp1.csproj]

Any idea why the NuGet packages aren't getting restored?
After many hours of searching and sifting through NuGet issue posts (and filtering out the .NET Core noise), I have a fix!

According to some NuGet and MSBuild issues raised, when restoring with NuGet (or msbuild /restore) under the local system account in Windows Server 2012, the folder NuGet uses isn't accessible, or it's a different folder due to the 32-bit vs 64-bit process that is running, so it can't download NuGets to that local cache folder. The folder that MSBuild wants to look in at compile time seems to be C:\Windows\system32\config\systemprofile\.nuget\packages.

The fix for us was to set the NuGet package cache folder to a different, accessible folder, using the system-wide environment variable NUGET_PACKAGES, e.g.

    NUGET_PACKAGES=C:\NugetPackageCache

You can also set this per Jenkins project by setting Build Environment -> Inject environment variables to the build process -> Properties Content to:

    NUGET_PACKAGES=C:/NugetPackageCache

Another potential fix, according to this NuGet issue post, is to set the environment variable to the folder that MSBuild is looking for the NuGets in, i.e.

    NUGET_PACKAGES=C:\Windows\system32\config\systemprofile\.nuget\packages

Note: the environment variables take precedence with NuGet. It doesn't look like they've updated the NuGet docs just yet to mention that precedence.

Note: to inject/set the environment variables we are using the EnvInject Jenkins plugin.
Jenkins
48,896,486
49
We have a project in a GitHub repository with multiple Jenkinsfiles:

    my-project
        app
            Jenkinsfile
        lib1
            Jenkinsfile
        lib2
            Jenkinsfile

We have created 3 Jenkins pipelines, each referring to one Jenkinsfile.

Question: how can we avoid triggering the "app" and "lib1" pipelines when there is a new commit in "lib2"? We don't want to run N jobs every time a commit happens. I've seen that the issue is addressed in https://issues.jenkins-ci.org/browse/JENKINS-43749, but I haven't found a solution there.
RECENT UPDATE:

I later fixed this issue using the following code snippet. Note the command dir('servicelayer'): I use it to move into the directory, execute a git command to find the difference between commits, and raise a flag. This way I have managed 3 Jenkinsfiles in a single repository.

    stage('Validation') {
        steps {
            // Moving into the directory to execute the commands
            dir('servicelayer') {
                script {
                    // Using the git command to check the difference against the previous successful commit.
                    // ${GIT_PREVIOUS_SUCCESSFUL_COMMIT} is an environment variable that comes with the Git Jenkins plugin.
                    // There is a drawback though: if it is the first time you are running this job, this variable
                    // is not available and fails the build. For the first run I had to use ${env.GIT_COMMIT} itself
                    // at both places to pass the build. A hack, but worth it for future builds.
                    def strCount = sh(returnStdout: true, script: "git diff --name-only ${env.GIT_COMMIT} ${env.GIT_PREVIOUS_SUCCESSFUL_COMMIT} | grep servicelayer | wc -l").trim()
                    if(strCount=="0") {
                        echo "Skipping build no files updated"
                        CONTINUE_BUILD = false
                    } else {
                        echo "Changes found in the servicelayer module"
                    }
                }
            }
        }
    }

OLD ANSWER:

You can do it in 2 ways:

a) Configure your build jobs by adding "Additional Behaviour" -> "Polling ignore commits in certain region". This behaviour lets you add "whitelisted regions" or "blacklisted regions" which you do not want to poll for triggering the build job.

b) Write a custom shell script to check the files changed per commit and verify their location. This shell script can then be added to your Jenkinsfile, and whether it is declarative or scripted you can tweak the behaviour accordingly.

I recommend option a), as it is simpler to configure and maintain. Hope this helps.
Jenkins
49,448,029
49
Is there a way to set the agent label dynamically and not as a plain string? The job has 2 stages:

1. First stage: runs on a "master" agent, always. At the end of this stage I will know on which agent the 2nd stage should run.
2. Second stage: should run on the agent decided in the first stage.

My (not working) attempt looks like this:

    pipeline {
        agent { label 'master' }
        stages {
            stage('Stage1') {
                steps {
                    script {
                        env.node_name = "my_node_label"
                    }
                    echo "node_name: ${env.node_name}"
                }
            }
            stage('Stage2') {
                agent { label "${env.node_name}" }
                steps {
                    echo "node_name: ${env.node_name}"
                }
            }
        }
    }

The first echo works fine and "my_node_label" is printed. The second stage fails to run on an agent labeled "my_node_label", and the console prints:

    There are no nodes with the label ‘null’

Maybe it can help: if I just put "${env}" in the label field I can see that this is a Java class, as it prints:

    There are no nodes with the label ‘org.jenkinsci.plugins.workflow.cps.EnvActionImpl@79c0ce06’
Here is how I made it work: mix scripted and declarative pipeline. First I used scripted syntax to find, for example, the branch I'm on, and then defined the AGENT_LABEL variable. This variable can be used anywhere along the declarative pipeline:

    def AGENT_LABEL = null

    node('master') {
        stage('Checkout and set agent') {
            checkout scm
            // Or just use any other approach to figure out the agent label: read a file, etc.
            if (env.BRANCH_NAME == 'master') {
                AGENT_LABEL = "prod"
            } else {
                AGENT_LABEL = "dev"
            }
        }
    }

    pipeline {
        agent { label "${AGENT_LABEL}" }

        stages {
            stage('Normal build') {
                steps {
                    echo "Running in ${AGENT_LABEL}"
                    sh "hostname"
                }
            }

            stage("Docker build") {
                agent {
                    dockerfile {
                        dir 'Dockerfiles'
                        label "${AGENT_LABEL}"
                    }
                }
                steps {
                    sh "hostname"
                }
            }
        }
    }
Jenkins
46,630,168
49
Dear Stack Overflow community, I am trying to set up a Jenkins CI pipeline using Docker images as containers for my build processes. I am defining a Jenkinsfile to have the build pipeline as code. I am doing something like this:

    node {
        docker.withRegistry('http://my.registry.com', 'docker-credentials') {
            def buildimage = docker.image('buildimage:latest');
            buildimage.pull();
            buildimage.inside("") {
                stage('Checkout sources') {
                    git url: '...', credentialsId: '...'
                }
                stage('Run Build and Publish') {
                    sh "..."
                }
            }
        }
    }

Unfortunately I am stumbling upon a weird behavior of the Docker Pipeline plugin. In the build output I can see that the Image.inside(...) command triggers the container with:

    docker run -t -d -u 1000:1000 ...

This makes my build fail, because the user defined in the Dockerfile does not have the UID 1000 ... another user is actually taken. I even tried specifying which user should be used within the Jenkinsfile:

    node {
        docker.withRegistry('http://my.registry.com', 'docker-credentials') {
            def buildimage = docker.image('buildimage:latest');
            buildimage.pull();
            buildimage.inside("-u otheruser:othergroup") {
                stage('Checkout sources') {
                    git url: '...', credentialsId: '...'
                }
                stage('Run Build and Publish') {
                    sh "..."
                }
            }
        }
    }

but this leads to a duplicate -u switch in the resulting docker run command:

    docker run -t -d -u 1000:1000 -u otheruser:othergroup ...

and obviously only the first -u is applied, because my build still fails. I also did some debugging using whoami to validate my assumptions.

So my questions: how can I change this behavior? Is there a switch where I can turn the -u 1000:1000 off? Is this even a bug? I actually like to work with the Docker plugin because it simplifies the usage of an own Docker registry with credentials maintained in Jenkins. However, is there another simple way to get to my goal if the Docker plugin is not usable?

Thank you in advance for your time.
I found you can actually change the user by adding args like the following. Although -u 1000:1000 will still be there in the docker run command, there will be an additional -u [your user] after 1000:1000, and Docker will actually use the latest -u parameter:

    agent {
        docker {
            image 'your image'
            args '-u root --privileged'
        }
    }
Jenkins
42,630,894
49
I am using a Jenkinsfile for scripting a pipeline. Is there any way to disable the printing of executed shell commands in the build logs? Here is a simple example of a Jenkins pipeline:

    node{
        stage ("Example") {
            sh('echo shellscript.sh arg1 arg2')
            sh('echo shellscript.sh arg3 arg4')
        }
    }

which produces the following output in the console log:

    [Pipeline] node
    Running on master in /var/lib/jenkins/workspace/testpipeline
    [Pipeline] {
    [Pipeline] stage
    [Pipeline] { (Test)
    [Pipeline] sh
    [testpipeline] Running shell script
    + echo shellscript.sh arg1 arg2
    shellscript.sh arg1 arg2
    [Pipeline] sh
    [testpipeline] Running shell script
    + echo shellscript.sh arg3 arg4
    shellscript.sh arg3 arg4
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] End of Pipeline
    Finished: SUCCESS

Basically I would like to disable the printing of the commands themselves:

    + echo shellscript.sh arg1 arg2
    + echo shellscript.sh arg3 arg4
By default Jenkins starts shell scripts with the flags -xe: -x enables additional logging, and -e makes the script exit if any command inside returns a non-zero exit status. To reset a flag I'd suggest two options:

1. Call set +x in the body of your script:

       sh 'set +x'

2. Pass a custom shebang line without -x:

       sh('#!/bin/sh -e\n' + 'echo shellscript.sh arg1 arg2')

As for the second option, you can define a wrapper function for convenience, which will prepend the script with the custom shebang and then call sh:

    def mysh(cmd) {
        sh('#!/bin/sh -e\n' + cmd)
    }
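If you want to see what the -x (xtrace) flag actually does before touching your Jenkinsfile, the behaviour can be reproduced in any POSIX shell:

```shell
# With xtrace on, the shell prints each command to stderr (prefixed by
# '+', the default PS4) before running it; 'set +x' switches it off.
set -x
echo "traced"
set +x
echo "quiet"
```

Only "traced" gets a `+ echo "traced"` trace line; "quiet" does not.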
Jenkins
39,891,926
49
I am getting the error "Test reports were found but none of them are new. Did tests run?" when trying to send unit test results by email. The reason is that I have a dedicated Jenkins job that imports the artifacts from a test job to itself, and sends the test results by email. The reason why I am doing this is that I don't want Jenkins to send all the developers email during the night :) so I am "postponing" the email sending, since Jenkins itself does not support delayed email notifications (sadly). However, by the time the "send test results by email" job executes, the tests are hours old and I get the error specified in the question title. Any ideas on how to get around this problem?
You could try updating the timestamps of the test reports as a build step ("Execute shell script"), e.g.:

    cd path/to/test/reports
    touch *.xml
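For context: the JUnit publisher appears to compare each report's modification time with the build's start time, which is why refreshing the mtime is enough. A standalone sketch of what the build step does (the paths are the placeholder paths from the answer):

```shell
# Simulate a stale report, then refresh it the way the build step would.
mkdir -p path/to/test/reports
: > path/to/test/reports/results.xml
touch -t 202001010000 path/to/test/reports/results.xml   # backdate it
touch path/to/test/reports/*.xml                         # mark as fresh again
ls -l path/to/test/reports
```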
Jenkins
13,879,667
49
I have been using PHP_CodeSniffer with Jenkins; my build.xml was configured for phpcs as below:

    <target name="phpcs">
        <exec executable="phpcs">
            <arg line="--report=checkstyle --report-file=${basedir}/build/logs/checkstyle.xml --standard=Zend ${source}"/>
        </exec>
    </target>

And I would like to ignore the following warning:

    FOUND 0 ERROR(S) AND 1 WARNING(S) AFFECTING 1 LINE(S)
    --------------------------------------------------------------------------------
     117 | WARNING | Line exceeds 80 characters; contains 85 characters
    --------------------------------------------------------------------------------

How could I ignore the line length warning?
You could create your own standard. The Zend one is quite simple (this is at /usr/share/php/PHP/CodeSniffer/Standards/Zend/ruleset.xml in my Debian install after installing it with PEAR). Create another one based on it, but ignore the line-length bit:

    <?xml version="1.0"?>
    <ruleset name="Custom">
     <description>Zend, but without linelength check.</description>
     <rule ref="Zend">
      <exclude name="Generic.Files.LineLength"/>
     </rule>
    </ruleset>

And set --standard=/path/to/your/ruleset.xml.

Optionally, if you just want to up the char count before this is triggered, redefine the rule:

    <!-- Lines can be N chars long (warnings), errors at M chars -->
    <rule ref="Generic.Files.LineLength">
     <properties>
      <property name="lineLimit" value="N"/>
      <property name="absoluteLineLimit" value="M"/>
     </properties>
    </rule>
Jenkins
9,280,716
49
How can I get the build timestamp of the latest build from Jenkins? I want to insert this value into the email subject in the post-build actions.
The Build Timestamp plugin is the best answer for getting timestamps into the build process. Follow the simple steps below to get the BUILD_TIMESTAMP variable enabled.

STEP 1: Manage Jenkins > Plugin Manager > Available plugins (or Installed plugins)... Search for "Build Timestamp". Install with or without restart.

STEP 2: Manage Jenkins > Configure System. Find the 'Build Timestamp' section, then enable the checkbox. Select the timezone and time format you want to set up, and save the page.

USAGE: When configuring the build with Ant or Maven, declare a global variable, e.g. btime=${BUILD_TIMESTAMP} (use this in the Properties box of the Ant or Maven build section). Then use 'btime' in your code for any string variables etc.
Jenkins
24,226,862
48
I have tried all sorts of ways, but nothing seems to be working. Here is my Jenkinsfile:

    def ZIP_NODE
    def CODE_VERSION

    pipeline{ /*A declarative pipeline*/
        agent { /*Agent section*/
            // where would you like to run the code
            label 'ubuntu'
        }
        options{
            timestamps()
        }
        parameters {
            choice(choices: ['dev'], description: 'Name of the environment', name: 'ENV')
            choice(choices: ['us-east-1', 'us-west-1','us-west-2','us-east-2','ap-south-1'], description: 'What AWS region?', name: 'AWS_DEFAULT_REGION')
            string(defaultValue: "", description: '', name: 'APP_VERSION')
        }
        stages{ /*stages section*/
            stage('Initialize the variables') {
                // Each stage is made up of steps
                steps{
                    script{
                        CODE_VERSION='${BUILD_NUMBER}-${ENV}'
                        ZIP_NODE='abcdefgh-0.0.${CODE_VERSION}.zip'
                    }
                }
            }
            stage ('code - Checkout') {
                steps{
                    checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'xxxxxxxxxxxxxxxxxxxxxxxxxx', url: 'http://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.git']]])
                }
            }
            stage ('code - Build'){
                steps{
                    sh '''
                    echo ${JOB_NAME}
                    pwd
                    echo ${ZIP_NODE}
                    echo 'remove alraedy existing zip files'
                    rm -rf *.zip
                    zip -r ${ZIP_NODE} .
                    chmod 777 $ZIP_NODE
                    '''
                }
            }
            stage('Deploy on Beanstalk'){
                steps{
                    build job: 'abcdefgh-PHASER' , parameters: [[$class: 'StringParameterValue', name: 'vpc', value: ENV], [$class: 'StringParameterValue', name: 'ZIP_NODE', value: ZIP_NODE], [$class: 'StringParameterValue', name: 'CODE_VERSION', value: CODE_VERSION], [$class: 'StringParameterValue', name: 'APP_VERSION', value: BUILD_NUMBER], [$class: 'StringParameterValue', name: 'AWS_DEFAULT_REGION', value: AWS_DEFAULT_REGION], [$class: 'StringParameterValue', name: 'ParentJobName', value: JOB_NAME]]
                }
            }
        }
    }

The output of the script step in stage 'Initialize the variables' gives me nothing; it is not setting the value of the global variable ZIP_NODE:

    [Pipeline] stage
    [Pipeline] { (Initialize the variables)
    [Pipeline] script
    [Pipeline] {
    [Pipeline] }
    [Pipeline] // script
    [Pipeline] }
    [Pipeline] // stage

And then when we get to stage 'code - Build' we do not get the value of ZIP_NODE. See the echo statement at 22:34:17:

    [Pipeline] stage
    [Pipeline] { (code - Build)
    [Pipeline] sh
    22:34:16 [abcdefgh-ci-dev-pipeline] Running shell script
    22:34:17 + echo abcdefgh-ci-dev-pipeline
    22:34:17 abcdefgh-ci-dev-pipeline
    22:34:17 + pwd
    22:34:17 /home/advisor/Jenkins/workspace/abcdefgh-ci-dev-pipeline
    22:34:17 + echo
    22:34:17
    22:34:17 + echo remove alraedy existing zip files

Thanks to @awefsome, I have some observations which I would like to add in detail. When I use the code below I get the desired output, i.e. the correct value of ZIP_NODE:

    stage ('code - Build'){
        steps{
            sh "echo ${JOB_NAME} && pwd && echo ${ZIP_NODE} && echo 'remove alraedy existing zip files' && rm -rf *.zip && zip -r ${ZIP_NODE} . && chmod 777 $ZIP_NODE"
        }
    }

But when I use the code below I do not get the value of ZIP_NODE:

    stage ('code - Build'){
        steps{
            sh '''
            echo ${ZIP_NODE}
            echo ${JOB_NAME}
            pwd
            echo ${ZIP_NODE}
            echo ${CODE_VERSION}
            #rm -rf .ebextensions
            echo 'remove alraedy existing zip files'
            rm -rf *.zip
            zip -r ${ZIP_NODE} .
            chmod 777 $ZIP_NODE
            '''
        }
    }
sh ''' ''' should be sh """ """. With single quotes the variables don't get processed: Groovy only interpolates ${...} expressions inside double-quoted (GString) literals, so a triple-single-quoted script reaches the shell with ${ZIP_NODE} unresolved, and the shell has no environment variable of that name to fall back on.
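For intuition, the shell itself has the exact same single- versus double-quote rule, which you can check locally (the variable name is borrowed from the question):

```shell
ZIP_NODE="abcdefgh-0.0.1-dev.zip"
echo 'single quotes: ${ZIP_NODE}'    # quoting blocks substitution
echo "double quotes: ${ZIP_NODE}"    # substitution happens
```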
Jenkins
52,063,864
48
I'm trying to create my first Groovy script for Jenkins. After looking at https://jenkins.io/doc/book/pipeline/, I created this:

    node {
        stages {
            stage('HelloWorld') {
                echo 'Hello World'
            }
            stage('git clone') {
                git clone "ssh://[email protected]/myrepo.git"
            }
        }
    }

However, I'm getting:

    java.lang.NoSuchMethodError: No such DSL method "stages" found among steps

What am I missing? Also, how can I pass my credentials to the Git repository without writing the password in plain text?
You are confusing and mixing scripted pipeline with declarative pipeline; for the complete difference see here. The short story:

Declarative pipeline is a new extension of the pipeline DSL. It is basically a pipeline script with only one step, a pipeline step with arguments (called directives), and these directives should follow a specific syntax. The point of this new format is that it is more strict and therefore should be easier for those new to pipelines, allows for graphical editing, and much more.

Scripted pipeline is the fallback for advanced requirements.

So, if we look at your script: you first open a node step, which is from scripted pipelines. Then you use stages, which is one of the directives of the pipeline step defined in declarative pipeline. So you can for example write:

    pipeline {
        ...
        stages {
            stage('HelloWorld') {
                steps {
                    echo 'Hello World'
                }
            }
            stage('git clone') {
                steps {
                    git clone "ssh://[email protected]/myrepo.git"
                }
            }
        }
    }

If you want to use declarative pipeline, that is the way to go. If you want scripted pipeline, then you write:

    node {
        stage('HelloWorld') {
            echo 'Hello World'
        }
        stage('git clone') {
            git clone "ssh://[email protected]/myrepo.git"
        }
    }

i.e., skip the stages block.
Jenkins
42,113,655
48
I'd like to access git variables such as GIT_COMMIT and GIT_BRANCH when I have checked out a repository from git further down in the build stream. Currently I find no available variable to access these two parameters.

    node {
        git git+ssh://git.com/myproject.git
        echo "$GIT_COMMIT - $BRANCH_NAME"
    }

Are such variables available, and if so, where would I find them? I don't mind if they are available through some Groovy variables or wherever, just that I can access them. Maybe I lack the debugging skills in Groovy and this is easy to find, but I just can't find it with my limited skills.
Depending on the SCM plugin you are using, the checkout step may return additional information about the revision. This was resolved in JENKINS-26100 and released in version 2.6 of the workflow-scm-step plugin. For example, using the Git plugin, you can do something like:

    final scmVars = checkout(scm)
    echo "scmVars: ${scmVars}"
    echo "scmVars.GIT_COMMIT: ${scmVars.GIT_COMMIT}"
    echo "scmVars.GIT_BRANCH: ${scmVars.GIT_BRANCH}"

This will vary depending on the plugin you use, so the original answer may work better for you.

Original answer: with the 2.4 release of the Pipeline Nodes and Processes plugin, you can simply do:

    def gitCommit = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
Jenkins
35,554,983
48
Recently I've been looking at Ansible and want to use it in projects. There's also another tool, Rundeck, that can be used to do all kinds of operations work. I have experience with neither tool, and this is my current understanding of them:

Similar points:

- Both tools are agent-less and use SSH to execute commands on remote servers
- Rundeck's main concept is the Node, the same as Ansible's inventory; the key idea is to define/manage/group the target servers
- Rundeck can execute ad-hoc commands on selected nodes; Ansible can also do this very conveniently
- Rundeck can define a workflow and execute it on selected nodes; this can be done with Ansible by writing a playbook
- Rundeck can be integrated with CI tools like Jenkins to do deploy work; we can also define a Jenkins job that runs ansible-playbook to do the deploy work

Different points:

- Rundeck has the concept of a Job, which Ansible does not
- Rundeck has a job scheduler, which Ansible can only achieve with other tools like Jenkins or cron tasks
- Rundeck has a web UI by default for free, but you have to pay for Ansible Tower

It seems both Ansible and Rundeck can be used to do configuration/management/deployment work, maybe in different ways. So my questions are:

1. Are these two complementary tools, or are they designed for different purposes? If they're complementary, why is Ansible only compared to tools like Chef/Puppet/Salt but not to Rundeck? If they're not, why do they have so many similar functionalities?
2. We're already using Jenkins for CI to build a continuous-delivery pipeline; which tool (Ansible/Rundeck) is the better choice for deployment?
3. If they can be used together, what's the best practice?

Any suggestions and experience sharing are greatly appreciated.
TL;DR - given your environment of Jenkins for CI/CD I'd recommend using just Ansible.

You've spotted that there is sizeable cross-over between Ansible and Rundeck, so it's probably best to concentrate on where each product focuses, its style, and its use.

Focus

I believe Rundeck's focus is on enabling sysadmins to build a (web-based) self-service portal that's accessible both to other sysadmins and, potentially, to less "technical"/sysadmin people. Rundeck's website says "Turn your operations procedures into self-service jobs. Safely give others the control and visibility they need." Rundeck also feels like it has a more 'centralised' view of the world: you load the jobs into a database and that's where they live.

To me, Ansible is for devops - building out and automating deployments of (self-built) applications in a way that is highly repeatable. I'd argue that Ansible is more focussed on software development houses that build their own products: Ansible 'playbooks' are text files, so normally stored in source control and normally alongside the app that the playbooks will deploy.

Job creation focus

With Rundeck you typically create jobs via the web UI. With Ansible you create tasks/playbooks in files via a text editor.

Operation/Task/Job style

Rundeck is by default imperative - you write scripts that are executed (via SSH). Ansible is both imperative (i.e. execute bash statements) and declarative, so in some cases, say, starting Apache, you can use the service task to make sure that it's running. This is closer to other configuration management tools like Puppet and Chef.

Complex jobs / scripts

Rundeck has the ability to run another job by defining a step in the Job's workflow, but from experience this feels like a tacked-on addition rather than a serious top-level feature. Ansible is designed to create complex operations; running/including/etc. are top-level features.

How it runs

Rundeck is a server app. If you want to run jobs from somewhere else (like CI) you'll either need to call out to the CLI or make an API call. Straight Ansible is command-line.

Proviso

Due to the cross-over and overall flexibility of Rundeck and Ansible you could achieve all of the above in each. You can achieve version control of your Rundeck jobs by exporting them to YAML or XML and checking them into source control. You can get a web UI in Ansible using Tower. Etc., etc.

Your questions:

Complementary tools? I could envision a SaaS shop using both: one might use Ansible to perform all deployment actions and then use Rundeck to perform one-off, ad-hoc jobs. However, while I could envision it, I wouldn't recommend that as a starting point. Me, I'd start with just Ansible and see how far I get. I'd only layer in Rundeck later on if I discovered that I really, really need to run one-offs.

CI/CD: Ansible. Your environment sounds more like a software house where you're deploying your own app. It should probably be repeatable (especially as you're going Continuous Delivery), so you'll want your deploy scripts in source control. You'll want simplicity, and Ansible is "just text files". I hope you will also want your devs to be able to run things on their machines (right?); Ansible is decentralised.

Used together (for CI/CD): Calling Rundeck from Ansible, no. Sure, it would be possible, but I'm struggling to come up with good reasons - at least, not very specialised specific-to-a-particular-app-or-framework reasons. Calling Ansible from Rundeck, yes. I could envision someone first building out some repeatable ad-hoc commands in Ansible, then seeing a little demand for being able to call them without a command line (say: non-technical users). But, again, this is getting specific to your environment.
Jenkins
31,152,102
48
I'm trying to improve Hudson CI for iOS and start Hudson as soon as the system starts up. To do this I'm using the following launchd script:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>Hudson CI</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/java</string>
        <string>-jar</string>
        <string>/Users/user/Hudson/hudson.war</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>UserName</key>
    <string>user</string>
</dict>
</plist>

This works OK, but when xcodebuild, which is started by Hudson, tries to sign an app it fails because it can't find the proper key/certificate in the keychain. However, the key/certificate pair is there, since everything works correctly if I start Hudson from the command line. Do you have any ideas why this happens?
I have found a solution giving me access to the regular keychains for my Jenkins user. Find this plist: /Library/LaunchDaemons/org.jenkins-ci.plist, then:

1. Add the UserName element with a value of jenkins.
2. Add a SessionCreate element with a value of true to the plist file. This gives access to the normal keychains for the user you specified in UserName.

Example:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>EnvironmentVariables</key>
    <dict>
        <key>JENKINS_HOME</key>
        <string>/Users/Shared/Jenkins/Home</string>
    </dict>
    <key>GroupName</key>
    <string>wheel</string>
    <key>KeepAlive</key>
    <true/>
    <key>Label</key>
    <string>org.jenkins-ci</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/Library/Application Support/Jenkins/jenkins-runner.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>UserName</key>
    <string>jenkins</string>
    <key>SessionCreate</key>
    <true/>
</dict>
</plist>

Then restart the daemon and try running a job in Jenkins that calls security list-keychains. You should no longer see System.keychain as the only entry, but the regular login and any custom keychains you might have added to the list of keychains for the "jenkins" user.

With the above setup I am able to use codesigning certificates from a custom keychain on my Jenkins build server. I don't have to install any certificates or keys in my System keychain.
Jenkins
6,827,874
48
As far as declarative pipelines go in Jenkins, I'm having trouble with the when keyword. I keep getting the error "No such DSL method 'when' found among steps". I'm sort of new to Jenkins 2 declarative pipelines and don't think I am mixing up scripted pipelines with declarative ones.

The goal of this pipeline is to run mvn deploy after a successful Sonar run and send out mail notifications of a failure or success. I only want the artifacts to be deployed when on master or a release branch. The part I'm having difficulties with is in the post section. The Notifications stage is working great. Note that I got this to work without the when clause, but I really need it or an equivalent.

pipeline {
    agent any
    tools {
        maven 'M3'
        jdk 'JDK8'
    }
    stages {
        stage('Notifications') {
            steps {
                sh 'mkdir tmpPom'
                sh 'mv pom.xml tmpPom/pom.xml'
                checkout([$class: 'GitSCM', branches: [[name: 'origin/master']], doGenerateSubmoduleConfigurations: false, submoduleCfg: [], userRemoteConfigs: [[url: 'https://repository.git']]])
                sh 'mvn clean test'
                sh 'rm pom.xml'
                sh 'mv tmpPom/pom.xml ../pom.xml'
            }
        }
    }
    post {
        success {
            script {
                currentBuild.result = 'SUCCESS'
            }
            when { branch 'master|release/*' }
            steps {
                sh 'mvn deploy'
            }
            sendNotification(recipients, null, 'https://link.to.sonar', currentBuild.result)
        }
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
            sendNotification(recipients, null, 'https://link.to.sonar', currentBuild.result)
        }
    }
}
In the documentation of declarative pipelines, it's mentioned that you can't use when in the post block; when is allowed only inside a stage directive. So what you can do is test the conditions using an if in a script:

post {
    success {
        script {
            if (env.BRANCH_NAME == 'master')
                currentBuild.result = 'SUCCESS'
        }
    }
    // failure block
}
Jenkins
49,798,549
47
I've been creating a few Multibranch Pipeline projects in Jenkins and now I've "upgraded" to use a GitHub Organization project. How do I disable the old Multibranch Pipeline projects? I don't see any Disable button anywhere. Here is a screenshot of what I mean: Since I can't add a screenshot to a reply, I'm editing my question to include the screenshot to show I have the latest version of the Pipeline Plugin installed, 2.16:
If you are using a recent version of the Pipeline Job plugin (I am using version 2.25 from Sep 5, 2018) and you do not see the disable option, then you can still disable the job by appending /disable to the URL of the job.

Source: "You would need to be logged in as a user who has access to write/configure builds. And if the build is Pipeline Multibranch you still won't see the disable button. If that's the case, you can append /disable to the project URL to disable it."

https://issues.jenkins-ci.org/browse/JENKINS-27299?focusedCommentId=336904&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-336904
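As a small illustrative sketch of the URL trick above (the helper and the example URL are hypothetical, not part of Jenkins), building the /disable endpoint from a job URL looks like:

```python
def disable_url(job_url: str) -> str:
    # Jenkins disables a job via a request to <job URL>/disable;
    # normalise any trailing slash before appending the suffix.
    return job_url.rstrip('/') + '/disable'

print(disable_url('https://jenkins.example.com/job/my-multibranch/job/master/'))
```

You would then visit (or POST to) that URL as a user with configure permission on the job.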
Jenkins
47,840,096
47
Is it possible to Scan a Multibranch Pipeline to detect the branches with a Jenkinsfile, but without the pipeline execution? My projects have different branches and I don't want that all the children pipelines branches with a Jenkinsfile to start to execute when I launch a build scan from the parent pipeline multibranch.
In your Branch Sources section you can add a Property named Suppress automatic SCM triggering. This prevents Jenkins from building everything with an Jenkinsfile.
Jenkins
44,004,636
47
I installed Jenkins by downloading jenkins-2.2.pkg. After the installation completed, Chrome auto-connected to http://localhost:8080/login?from=%2F and I see the following message:

"Unlock Jenkins
To ensure Jenkins is securely set up by the administrator, a password has been written to the log (not sure where to find it?) and this file on the server: /Users/Shared/Jenkins/Home/secrets/initialAdminPassword
Please copy the password from either location and paste it below."

But I don't have access to the secrets folder on my MacBook even though I'm the admin user. Please help me with how to find the initial admin password.
1. Navigate to the folder /Users/Shared/Jenkins/Home.
2. Right click on the secrets/ folder and select "Get Info".
3. Scroll down to the bottom right corner of the pop-up window, click on the lock image, enter your password, and click OK.
4. Click on the "+" at the bottom left corner of the pop-up window and add your user.
5. Click on the Settings icon at the bottom left and apply the changes.
6. Open the "secrets" folder and find the initialAdminPassword file to get the initial admin password.

If you don't have permission to the file, right click on the file, select "Get Info", then repeat steps 3 and 4 above to access the file.
Jenkins
37,146,063
47
I want to add a build step with the Groovy plugin to read a file and trigger a build failure depending on the content of the file. How can I inject the workspace file path in the Groovy plugin?

myFileDirectory = // Get workspace filepath here ???
myFileName = "output.log"
myFile = new File(myFileDirectory + myFileName)
lastLine = myFile.readLines().get(myFile.readLines().size().toInteger() - 1)

if (lastLine ==~ /.Fatal Error.*/) {
    println "Fatal error found"
    System.exit(1)
} else {
    println "nothing to see here"
}
I realize this question was about creating a plugin, but since the new Jenkins 2 Pipeline builds use Groovy, I found myself here while trying to figure out how to read a file from a workspace in a Pipeline build. So maybe I can help someone like me out in the future.

Turns out it's very easy; there is a readFile step, and I should have RTFM:

env.WORKSPACE = pwd()
def version = readFile "${env.WORKSPACE}/version.txt"
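The fail-on-last-line logic from the question is easy to mirror outside Groovy; here is a minimal Python sketch of the same check (the file name and "Fatal Error" pattern are just the ones from the question):

```python
import re
from pathlib import Path

def last_line_is_fatal(log_path):
    # Read the log once and inspect only its final line,
    # mirroring the Groovy build step in the question.
    lines = Path(log_path).read_text().splitlines()
    return bool(lines) and re.search(r'Fatal Error', lines[-1]) is not None

# A tiny stand-in log whose last line should trip the check.
Path('output.log').write_text('step 1 ok\nstep 2 ok\nFatal Error: build broke\n')
print(last_line_is_fatal('output.log'))  # → True
```

A CI wrapper would exit non-zero when the function returns True, which is what `System.exit(1)` does in the Groovy version.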
Jenkins
22,917,491
47
I have been trying to follow tutorials and this one: Deploy as Jenkins User or Allow Jenkins To Run As Different User? but I still can't, for the love of the computing gods, run as a different user. Here are the steps of what I did:

1. Download the macosx pkg for Jenkins (LTS)
2. Set up plugins etc. and git
3. Try to build it

I keep getting a can't-clone error because Jenkins keeps starting as anonymous: "Started by user anonymous". How do I set it up so that Jenkins runs as me? I was using the Jenkins web UI, so it was on localhost:8080. I tried logging in, also using /login, but I can't even log in using my name or as root. The People tab doesn't even have a "create user" link, so yeah, I've been stuck. Help please?
The "Issue 2" answer given by @Sagar works for the majority of git servers such as gitorious. However, there will be a name clash in a system like gitolite where the public ssh keys are checked in as files named after the username, i.e. keydir/jenkins.pub. What if there are multiple Jenkins servers that need to access the same gitolite server? (Note: this is about running the Jenkins daemon, not running a build job as a user, which is addressed by @Sagar's "Issue 1".)

So in this case you do need to run the Jenkins daemon as a different user. There are three steps:

Step 1

The main thing is to update the JENKINS_USER environment variable. Here's a patch showing how to change the user to ptran.

BEGIN PATCH
--- etc/default/jenkins.old	2011-10-28 17:46:54.410305099 -0700
+++ etc/default/jenkins	2011-10-28 17:47:01.670369300 -0700
@@ -13,7 +13,7 @@
 PIDFILE=/var/run/jenkins/jenkins.pid
 # user id to be invoked as (otherwise will run as root; not wise!)
-JENKINS_USER=jenkins
+JENKINS_USER=ptran
 # location of the jenkins war file
 JENKINS_WAR=/usr/share/jenkins/jenkins.war
--- etc/init.d/jenkins.old	2011-10-28 17:47:20.878539172 -0700
+++ etc/init.d/jenkins	2011-10-28 17:47:47.510774714 -0700
@@ -23,7 +23,7 @@
 #DAEMON=$JENKINS_SH
 DAEMON=/usr/bin/daemon
-DAEMON_ARGS="--name=$NAME --inherit --env=JENKINS_HOME=$JENKINS_HOME --output=$JENKINS_LOG --pidfile=$PIDFILE"
+DAEMON_ARGS="--name=$JENKINS_USER --inherit --env=JENKINS_HOME=$JENKINS_HOME --output=$JENKINS_LOG --pidfile=$PIDFILE"
 SU=/bin/su
END PATCH

Step 2

Update ownership of the Jenkins directories:

chown -R ptran /var/log/jenkins
chown -R ptran /var/lib/jenkins
chown -R ptran /var/run/jenkins
chown -R ptran /var/cache/jenkins

Step 3

Restart Jenkins:

sudo service jenkins restart
Jenkins
6,692,330
47
I am trying to run a simple pipeline script in Jenkins with two stages. The script itself creates a text file and checks that it exists. But when I try to run the job I get an "Expected a step" error. I have read somewhere that you can't have an if inside a step, so that might be the problem, but if so, how can I check without using the if?

pipeline {
    agent { label 'Test' }
    stages {
        stage('Write') {
            steps {
                writeFile file: 'NewFile.txt', text: '''Sample HEADLINE'''
                println "New File created..."
            }
        }
        stage('Check') {
            steps {
                Boolean bool = fileExists 'NewFile.txt'
                if (bool) {
                    println "The File exists :)"
                } else {
                    println "The File does not exist :("
                }
            }
        }
    }
}

I expect the script to create a "NewFile.txt" in the agent's workspace and print a text to the console confirming that it exists. But I actually get two "Expected a step" errors: at the line starting with Boolean bool = ... and at if(bool) ...
You are missing a script{} step, which is required in a declarative pipeline. Quote: "The script step takes a block of Scripted Pipeline and executes that in the Declarative Pipeline."

stage('Check') {
    steps {
        script {
            Boolean bool = fileExists 'NewFile.txt'
            if (bool) {
                println "The File exists :)"
            } else {
                println "The File does not exist :("
            }
        }
    }
}
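Outside Jenkins, the write-then-check flow the pipeline performs is just two filesystem operations; here is a minimal Python sketch of the same logic (file name and messages taken from the question):

```python
from pathlib import Path

# Stage 'Write': create the file with some sample content.
Path('NewFile.txt').write_text('Sample HEADLINE\n')
print('New File created...')

# Stage 'Check': mirror the fileExists step and branch on the result.
if Path('NewFile.txt').exists():
    print('The File exists :)')
else:
    print('The File does not exist :(')
```

The Groovy `fileExists` step plays the role of `Path.exists()` here; the declarative-pipeline restriction is only that the if/else must live inside a `script {}` block.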
Jenkins
55,508,871
46
Anybody knows how to remove the users from the Credentials drop down in Jenkins for a project under Source Code Management -> Git Repositories Referring to the section highlighted in yellow in attached screen shot: I seem to have added a few users in error and want to remove them from the drop down. I dont see any option to delete them.
Ok i found it, just had to look around. It was under the Jenkins Home page -> Credentials. It is not present under the Credentials section of the Configuration page. I thought since it was GIT based, it was storing users under that configuration.
Jenkins
34,721,686
46
I am trying to install Jenkins on Ubuntu. I have followed the commands below:

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -
echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list

then apt-get update and apt-get install jenkins, but it shows:

Starting Jenkins Continuous Integration Server Jenkins
The selected http port (8080) seems to be in use by another program
Please select another port to use for jenkins

I need help setting a different port for Jenkins to run on.
First open the /etc/default/jenkins file. Then, under the JENKINS_ARGS section, change the port like this: HTTP_PORT=9999. Then restart Jenkins with sudo service jenkins restart. To check the status, use sudo systemctl status jenkins.
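The edit to /etc/default/jenkins is a one-line substitution; here is a small Python sketch of the rewrite, purely illustrative (the config fragment is a stand-in, and in practice you would just use a text editor or sed):

```python
import re

# A fragment of /etc/default/jenkins as it might ship.
config = 'JENKINS_USER=jenkins\nHTTP_PORT=8080\n'

# Point Jenkins at port 9999 instead of the busy 8080.
updated = re.sub(r'^HTTP_PORT=\d+$', 'HTTP_PORT=9999', config, flags=re.M)
print(updated)
```

After saving the real file, the restart is what makes Jenkins pick up the new port.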
Jenkins
28,340,877
46
I am working with Jenkins CI and am trying to properly configure my jobs to use git. I have the git plugin installed and configured for one of my jobs. When I build the job, I expect it to pull the latest changes for the branch I specify and then continue with the rest of the build process (e.g., unit tests, etc.). When I look at the console output, I see:

> git fetch --tags --progress ssh://gerrit@git-dev/Util +refs/heads/*:refs/remotes/origin/*
> git rev-parse origin/some_branch^{commit}
Checking out Revision <latest_SHA1> (origin/some_branch)
> git config core.sparsecheckout
> git checkout -f <latest_SHA1>
> git rev-list <latest_SHA1>

I see that the plugin fetches and checks out the proper commit hash, but when the tests run it seems as though the repo wasn't updated at all. If I go into the repository in Jenkins, I see there that the latest changes were never pulled. Shouldn't it pull before it tries to build?

I have git 1.8.5 installed on my Jenkins machine, which is a recommended version: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin

After checking other similar-sounding questions on SO, their answers weren't helpful for my problem.
Relates me to scenario where workspace wasn't getting cleaned-up, used: Source Code Management--> Additional Behaviours --> Clean after checkout Other option is to use Workspace Cleanup Plugin
Jenkins
25,774,895
46
It would be nice for our Jenkins CI server to automatically detect, deploy and build tags as they are created in our Github repository. Is this possible?
With the following configuration, you can make a job build all tags:

1. Make the job fetch tags as if they were branches: click on the Advanced button below the repository URL and enter the Refspec +refs/tags/*:refs/remotes/origin/tags/*
2. Have it build all tag "branches" with the Branch Specifier */tags/*
3. Enable SCM polling, so that the job detects new tags.

This approach has one drawback: the job will build all tags and not just newly added tags. So after you have created the job, it will be triggered once for every existing tag. You probably want to have the job do nothing at first, wait until all existing tags have been processed, and only then configure the build steps you want done for every new tag. Since tags don't change in git, the job will then only be triggered once for every new tag.
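To see why the */tags/* branch specifier picks up the fetched tag refs, here is a small illustrative Python sketch. Jenkins' actual matching lives in the Git plugin; this just mimics the wildcard logic with `fnmatch`, using made-up ref names:

```python
from fnmatch import fnmatch

# Refs as they appear after fetching with +refs/tags/*:refs/remotes/origin/tags/*:
# tags now sit alongside branches under origin/tags/.
refs = ['origin/master', 'origin/tags/v1.0', 'origin/tags/v1.1']

specifier = '*/tags/*'
matched = [r for r in refs if fnmatch(r, specifier)]
print(matched)  # → ['origin/tags/v1.0', 'origin/tags/v1.1']
```

Only the tag "branches" match the specifier, so ordinary branch heads are left alone while every fetched tag becomes buildable.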
Jenkins
7,805,603
46