I know this (or similar) has been asked many times but having tried out numerous possibilities I've not been able to find a regex that works 100%. I've got a CSV file and I'm trying to split it into an array, but encountering two problems: quoted commas and empty elements. The CSV looks like: 123,2.99,AMO024,Title,"Description, more info",,123987564 The regex I've tried to use is: thisLine.split(/,(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))/) The only problem is that in my output array the 5th element comes out as 123987564 and not an empty string.
Description Instead of using a split, I think it would be easier to simply execute a match and process all the found matches. This expression will: divide your sample text on the comma delimits will process empty values will ignore double quoted commas, providing double quotes are not nested trims the delimiting comma from the returned value trims surrounding quotes from the returned value if the string starts with a comma, then the first capture group will return a null value Regex: (?:^|,)(?=[^"]|(")?)"?((?(1)[^"]*|[^,"]*))"?(?=,|$) Example Sample Text 123,2.99,AMO024,Title,"Description, more info",,123987564 ASP example using the non-java expression Set regEx = New RegExp regEx.Global = True regEx.IgnoreCase = True regEx.MultiLine = True sourcestring = "your source string" regEx.Pattern = "(?:^|,)(?=[^""]|("")?)""?((?(1)[^""]*|[^,""]*))""?(?=,|$)" Set Matches = regEx.Execute(sourcestring) For z = 0 to Matches.Count-1 results = results & "Matches(" & z & ") = " & chr(34) & Server.HTMLEncode(Matches(z)) & chr(34) & chr(13) For zz = 0 to Matches(z).SubMatches.Count-1 results = results & "Matches(" & z & ").SubMatches(" & zz & ") = " & chr(34) & Server.HTMLEncode(Matches(z).SubMatches(zz)) & chr(34) & chr(13) next results=Left(results,Len(results)-1) & chr(13) next Response.Write "<pre>" & results Matches using the non-java expression Group 0 gets the entire substring which includes the comma Group 1 gets the quote if it's used Group 2 gets the value not including the comma [0][0] = 123 [0][1] = [0][2] = 123 [1][0] = ,2.99 [1][1] = [1][2] = 2.99 [2][0] = ,AMO024 [2][1] = [2][2] = AMO024 [3][0] = ,Title [3][1] = [3][2] = Title [4][0] = ,"Description, more info" [4][1] = " [4][2] = Description, more info [5][0] = , [5][1] = [5][2] = [6][0] = ,123987564 [6][1] = [6][2] = 123987564 Edited As Boris pointed out CSV format will escape a double quote " as a double double quote "". 
Although this requirement wasn't included by the OP, if your text includes double double quotes then you'll want to use this modified expression: Regex: (?:^|,)(?=[^"]|(")?)"?((?(1)(?:[^"]|"")*|[^,"]*))"?(?=,|$) See also: https://regex101.com/r/y8Ayag/1 It should also be pointed out that Regex is a pattern-matching tool, not a parsing engine. Therefore if your text includes double double quotes it will still contain the double double quotes after pattern matching is completed. With this solution you'd still need to search for the double double quotes and replace them in your captured text.
Split
18,144,431
65
I have this text file made up of numbers and words, for example like this - 09807754 18 n 03 aristocrat 0 blue_blood 0 patrician and I want to split it so that each word or number will come up as a new line. A whitespace separator would be ideal as I would like the words with the dashes to stay connected. This is what I have so far: f = open('words.txt', 'r') for word in f: print(word) I'm not really sure how to go from here, I would like this to be the output: 09807754 18 n 3 aristocrat ...
Given this file: $ cat words.txt line1 word1 word2 line2 word3 word4 line3 word5 word6 If you just want one word at a time (ignoring the meaning of spaces vs line breaks in the file): with open('words.txt','r') as f: for line in f: for word in line.split(): print(word) Prints: line1 word1 word2 line2 ... word6 Similarly, if you want to flatten the file into a single flat list of words in the file, you might do something like this: with open('words.txt') as f: flat_list=[word for line in f for word in line.split()] >>> flat_list ['line1', 'word1', 'word2', 'line2', 'word3', 'word4', 'line3', 'word5', 'word6'] Which can create the same output as the first example with print '\n'.join(flat_list)... Or, if you want a nested list of the words in each line of the file (for example, to create a matrix of rows and columns from a file): with open('words.txt') as f: matrix=[line.split() for line in f] >>> matrix [['line1', 'word1', 'word2'], ['line2', 'word3', 'word4'], ['line3', 'word5', 'word6']] If you want a regex solution, which would allow you to filter wordN vs lineN type words in the example file: import re with open("words.txt") as f: for line in f: for word in re.findall(r'\bword\d+', line): # wordN by wordN with no lineN Or, if you want that to be a line by line generator with a regex: with open("words.txt") as f: (word for line in f for word in re.findall(r'\w+', line))
Split
16,922,214
64
I need to split text before the second occurrence of the '-' character. What I have now is producing inconsistent results. I've tried various combinations of rsplit and read through and tried other solutions on SO, with no results. Sample file name to split: 'some-sample-filename-to-split' returned in data.filename. In this case, I would only like to have 'some-sample' returned. fname, extname = os.path.splitext(data.filename) file_label = fname.rsplit('/',1)[-1] file_label2 = file_label.rsplit('-',maxsplit=3) print(file_label2,'\n','---------------','\n')
You can do something like this: >>> a = "some-sample-filename-to-split" >>> "-".join(a.split("-", 2)[:2]) 'some-sample' a.split("-", 2) will split the string upto the second occurrence of -. a.split("-", 2)[:2] will give the first 2 elements in the list. Then simply join the first 2 elements. OR You could use regular expression : ^([\w]+-[\w]+) >>> import re >>> reg = r'^([\w]+-[\w]+)' >>> re.match(reg, a).group() 'some-sample' EDIT: As discussed in the comments, here is what you need: def hyphen_split(a): if a.count("-") == 1: return a.split("-")[0] return "-".join(a.split("-", 2)[:2]) >>> hyphen_split("some-sample-filename-to-split") 'some-sample' >>> hyphen_split("some-sample") 'some'
Split
36,300,158
63
I have a pandas dataframe with a column named 'City, State, Country'. I want to separate this column into three new columns, 'City', 'State' and 'Country'. 0 HUN 1 ESP 2 GBR 3 ESP 4 FRA 5 ID, USA 6 GA, USA 7 Hoboken, NJ, USA 8 NJ, USA 9 AUS Splitting the column into three columns is trivial enough: location_df = df['City, State, Country'].apply(lambda x: pd.Series(x.split(','))) However, this creates left-aligned data: 0 1 2 0 HUN NaN NaN 1 ESP NaN NaN 2 GBR NaN NaN 3 ESP NaN NaN 4 FRA NaN NaN 5 ID USA NaN 6 GA USA NaN 7 Hoboken NJ USA 8 NJ USA NaN 9 AUS NaN NaN How would one go about creating the new columns with the data right-aligned? Would I need to iterate through every row, count the number of commas and handle the contents individually?
I'd do something like the following: foo = lambda x: pd.Series([i for i in reversed(x.split(','))]) rev = df['City, State, Country'].apply(foo) print rev 0 1 2 0 HUN NaN NaN 1 ESP NaN NaN 2 GBR NaN NaN 3 ESP NaN NaN 4 FRA NaN NaN 5 USA ID NaN 6 USA GA NaN 7 USA NJ Hoboken 8 USA NJ NaN 9 AUS NaN NaN I think that gets you what you want but if you also want to pretty things up and get a City, State, Country column order, you could add the following: rev.rename(columns={0:'Country',1:'State',2:'City'},inplace=True) rev = rev[['City','State','Country']] print rev City State Country 0 NaN NaN HUN 1 NaN NaN ESP 2 NaN NaN GBR 3 NaN NaN ESP 4 NaN NaN FRA 5 NaN ID USA 6 NaN GA USA 7 Hoboken NJ USA 8 NaN NJ USA 9 NaN NaN AUS
Split
23,317,342
63
I want to split a string like "first middle last" with String.split(). But when I try to split it I get String[] array = {"first","","","","middle","","last"} I tried using String.isEmpty() to check for empty strings after I split them, but it doesn't work in Android. Here is my code: String s = "First Middle Last"; String[] array = s.split(" "); for(int i=0; i<array.length; i++) { //displays segmented strings here } I think there is a way to split it like this: {"first","middle","last"} but can't figure out how. Thanks for the help!
Since the argument to split() is a regular expression, you can look for one or more spaces (" +") instead of just one space (" "). String[] array = s.split(" +");
Split
10,079,415
63
I'm trying to split a photo into multiple pieces using PIL. def crop(Path,input,height,width,i,k,x,y,page): im = Image.open(input) imgwidth = im.size[0] imgheight = im.size[1] for i in range(0,imgheight-height/2,height-2): print i for j in range(0,imgwidth-width/2,width-2): print j box = (j, i, j+width, i+height) a = im.crop(box) a.save(os.path.join(Path,"PNG","%s" % page,"IMG-%s.png" % k)) k +=1 but it doesn't seem to be working. It splits the photo but not in an exact way (you can try it).
Splitting image to tiles of MxN pixels (assuming im is numpy.ndarray): tiles = [im[x:x+M,y:y+N] for x in range(0,im.shape[0],M) for y in range(0,im.shape[1],N)] In the case you want to split the image to four pieces: M = im.shape[0]//2 N = im.shape[1]//2 tiles[0] holds the upper left tile
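If numpy isn't available, the same slicing idea works on a plain nested list; a rough pure-Python sketch (the 4x6 sample grid is made up for illustration):

```python
# Toy 4x6 "image" as a nested list; values are just cell indices.
im = [[r * 6 + c for c in range(6)] for r in range(4)]

M, N = 2, 3  # tile height and width

# Same idea as the numpy version: step over the grid in MxN blocks.
tiles = [[row[y:y + N] for row in im[x:x + M]]
         for x in range(0, len(im), M)
         for y in range(0, len(im[0]), N)]

print(len(tiles))   # 4 tiles
print(tiles[0])     # upper-left tile: [[0, 1, 2], [6, 7, 8]]
```

As with the numpy version, this silently drops partial tiles only if M and N divide the image dimensions evenly; otherwise the edge tiles are simply smaller.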
Split
5,953,373
63
Is it possible to reopen closed window in vim, that was in split? Something like ctrl+shift+t with browser tabs?
:vs# will split current window vertically and open the alternate file. It's so simple that you don't need to bind it to key.
Split
8,184,001
62
I want to split a text by commas, not spaces, in a for foo in list loop. Suppose I have a CSV file CSV_File with the following text inside it: Hello,World,Questions,Answers,bash shell,script ... I used the following code to split it into several words: for word in $(cat CSV_File | sed -n 1'p' | tr ',' '\n') do echo $word done It prints: Hello World Questions Answers bash shell script But I want it to split the text by commas, not spaces: Hello World Questions Answers bash shell script How can I achieve this in bash?
Set IFS to ,: sorin@sorin:~$ IFS=',' ;for i in `echo "Hello,World,Questions,Answers,bash shell,script"`; do echo $i; done Hello World Questions Answers bash shell script sorin@sorin:~$
Split
7,718,307
62
We need to split a large live WMV video feed in small chunks all of the same size. We made a script that works fine doing this, except for one thing: the video chunks don't start with a key frame, so when playing most video chunks they don't display any image until a key frame from the original video is eventually reached. Isn't there a way to tell ffmpeg to make the output video to start with a key frame? Here is how our command lines look right now: ffmpeg.exe -i "C:\test.wmv" -ss 00:00:00 -t 00:00:05 -acodec copy -vcodec copy -async 1 -y "0000.wmv" ffmpeg.exe -i "C:\test.wmv" -ss 00:00:05 -t 00:00:05 -acodec copy -vcodec copy -async 1 -y "0001.wmv" and so on...
The latest builds of FFMPEG include a new option "segment" which does exactly what I think you need. ffmpeg -i INPUT.mp4 -acodec copy -f segment -vcodec copy \ -reset_timestamps 1 -map 0 OUTPUT%d.mp4 This produces a series of numbered output files which are split into segments based on Key Frames. In my own testing, it's worked well, although I haven't used it on anything longer than a few minutes and only in MP4 format.
Split
14,005,110
61
I already know how to use the diffopt variable to start diff mode with horizontal/vertical splits but not how to toggle between the two when I already have two files open for comparison. I tried the accepted answer found in this older post, but to no avail. The Ctrl+W commands didn't work for me. Perhaps because I'm running gVim in Windows-friendly mode?
The following command will change a vertical split into a horizontal split: ctrl+w then shift+j To change back to a vertical split use either: ctrl+w then shift+h or ctrl+w then shift+l For more information about moving windows: :h window-moving :h ctrl-w_J :h ctrl-w_K :h ctrl-w_H :h ctrl-w_L
Split
5,682,759
61
I need to write a procedure to normalize a record that has multiple tokens concatenated by one character. I need to obtain these tokens by splitting the string and insert each one as a new record in a table. Does Oracle have something like a "split" function?
There is apex_util.string_to_table - see my answer to this question. Also, prior to the existence of the above function, I once posted a solution here on my blog. Update In later versions of APEX, apex_util.string_to_table is deprecated, and a similar function apex_string.split is preferred.
Split
3,710,589
61
I have a list of product codes in a text file, on each line is the product code that looks like: abcd2343 abw34324 abc3243-23A So it is letters followed by numbers and other characters. I want to split on the first occurrence of a number.
import re s='abcd2343 abw34324 abc3243-23A' re.split('(\d+)',s) > ['abcd', '2343', ' abw', '34324', ' abc', '3243', '-', '23', 'A'] Or, if you want to split on the first occurrence of a digit: re.findall('\d*\D+',s) > ['abcd', '2343 abw', '34324 abc', '3243-', '23A'] \d+ matches 1-or-more digits. \d*\D+ matches 0-or-more digits followed by 1-or-more non-digits. \d+|\D+ matches 1-or-more digits or 1-or-more non-digits. Consult the docs for more about Python's regex syntax. re.split(pat, s) will split the string s using pat as the delimiter. If pat begins and ends with parentheses (so as to be a "capturing group"), then re.split will return the substrings matched by pat as well. For instance, compare: re.split('\d+', s) > ['abcd', ' abw', ' abc', '-', 'A'] # <-- just the non-matching parts re.split('(\d+)', s) > ['abcd', '2343', ' abw', '34324', ' abc', '3243', '-', '23', 'A'] # <-- both the non-matching parts and the captured groups In contrast, re.findall(pat, s) returns only the parts of s that match pat: re.findall('\d+', s) > ['2343', '34324', '3243', '23'] Thus, if s ends with a digit, you could avoid ending with an empty string by using re.findall('\d+|\D+', s) instead of re.split('(\d+)', s): s='abcd2343 abw34324 abc3243-23A 123' re.split('(\d+)', s) > ['abcd', '2343', ' abw', '34324', ' abc', '3243', '-', '23', 'A ', '123', ''] re.findall('\d+|\D+', s) > ['abcd', '2343', ' abw', '34324', ' abc', '3243', '-', '23', 'A ', '123']
Split
3,340,081
61
I am using split() to tokenize a String separated with * following this format: name*lastName*ID*school*age % name*lastName*ID*school*age % name*lastName*ID*school*age I'm reading this from a file named "entrada.al" using this code: static void leer() { try { String ruta="entrada.al"; File myFile = new File (ruta); FileReader fileReader = new FileReader(myFile); BufferedReader reader = new BufferedReader(fileReader); String line = null; while ((line=reader.readLine())!=null){ if (!(line.equals("%"))){ String [] separado = line.split("*"); //SPLIT CALL names.add(separado[0]); lastNames.add(separado[1]); ids.add(separado[2]); ages.add(separado[3]); } } reader.close(); } And I'm getting this exception: Exception in thread "main" java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0 * My guess is that the lack of a * after age on the original text file is causing this. How do I get around it?
No, the problem is that * is a reserved character in regexes, so you need to escape it. String [] separado = line.split("\\*"); * means "zero or more of the previous expression" (see the Pattern Javadocs), and you weren't giving it any previous expression, making your split expression illegal. This is why the error was a PatternSyntaxException.
Split
917,822
61
I have a table with these values: Articles/Search/ArtMID/2681/ArticleID/2218/Diet.aspx OurStory/MeettheFoodieandtheMD.aspx TheFood/OurMenu.aspx I want to get this: Diet.aspx MeettheFoodieandtheMD.aspx OurMenu.aspx How can I do this?
The way to do it in SQL : SELECT SUBSTRING( string , LEN(string) - CHARINDEX('/',REVERSE(string)) + 2 , LEN(string) ) FROM SAMPLE; JSFiddle here http://sqlfiddle.com/#!3/41ead/11
Split
14,412,898
60
For example, there is a string val s = "Test". How do you separate it into t, e, s, t?
Do you need characters? "Test".toList // Makes a list of characters "Test".toArray // Makes an array of characters Do you need bytes? "Test".getBytes // Java provides this Do you need strings? "Test".map(_.toString) // Vector of strings "Test".sliding(1).toList // List of strings "Test".sliding(1).toArray // Array of strings Do you need UTF-32 code points? Okay, that's a tougher one. def UTF32point(s: String, idx: Int = 0, found: List[Int] = Nil): List[Int] = { if (idx >= s.length) found.reverse else { val point = s.codePointAt(idx) UTF32point(s, idx + java.lang.Character.charCount(point), point :: found) } } UTF32point("Test")
Split
5,052,042
60
What is a good way to do some_string.split('') in python? This syntax gives an error: a = '1111' a.split('') ValueError: empty separator I would like to obtain: ['1', '1', '1', '1']
Use list(): >>> list('1111') ['1', '1', '1', '1'] Alternatively, you can use map() (Python 2.7 only): >>> map(None, '1111') ['1', '1', '1', '1'] Time differences: $ python -m timeit "list('1111')" 1000000 loops, best of 3: 0.483 usec per loop $ python -m timeit "map(None, '1111')" 1000000 loops, best of 3: 0.431 usec per loop
Split
17,380,592
59
I don't understand this behaviour: var string = 'a,b,c,d,e:10.'; var array = string.split ('.'); I expect this: console.log (array); // ['a,b,c,d,e:10'] console.log (array.length); // 1 but I get this: console.log (array); // ['a,b,c,d,e:10', ''] console.log (array.length); // 2 Why are two elements returned instead of one? How does split work? Is there another way to do this?
You could add a filter to exclude the empty string. var string = 'a,b,c,d,e:10.'; var array = string.split ('.').filter(function(el) {return el.length != 0});
Split
12,836,062
59
In JS if you would like to split user entry into an array what is the best way of going about it? For example: entry = prompt("Enter your name") for (i=0; i<entry.length; i++) { entryArray[i] = entry.charAt([i]); } // entryArray=['j', 'e', 'a', 'n', 's', 'y'] after loop Perhaps I'm going about this the wrong way - would appreciate any help!
Use the .split() method. When specifying an empty string as the separator, the split() method will return an array with one element per character. entry = prompt("Enter your name") entryArray = entry.split("");
Split
7,979,727
59
In SQL Server 2016 I receive this error with STRING_SPLIT function SELECT * FROM STRING_SPLIT('a,b,c',',') Error: Invalid object name 'STRING_SPLIT'.
Make sure that the database compatibility level is 130 you can use the following query to change it: ALTER DATABASE [DatabaseName] SET COMPATIBILITY_LEVEL = 130 As mentioned in the comments, you can check the current compatibility level of a database using the following command: SELECT compatibility_level FROM sys.databases WHERE name = 'Your-Database-Name';
Split
47,205,829
58
I have a file that contains values separated by tabs ("\t"). I am trying to create a list and store all the values of the file in the list. But I ran into a problem. Here is my code. line = "abc def ghi" values = line.split("\t") It works fine as long as there is only one tab between each value. But if there is more than one tab, it copies the tab into the values as well. In my case the extra tab will mostly be after the last value in the file.
You can use regex here: >>> import re >>> strs = "foo\tbar\t\tspam" >>> re.split(r'\t+', strs) ['foo', 'bar', 'spam'] update: You can use str.rstrip to get rid of trailing '\t' and then apply regex. >>> yas = "yas\t\tbs\tcda\t\t" >>> re.split(r'\t+', yas.rstrip('\t')) ['yas', 'bs', 'cda']
Split
17,038,426
58
I want to open my file.txt and split all the data from this file. Here is my file.txt: some_data1 some_data2 some_data3 some_data4 some_data5 and here is my Python code: >>>file_txt = open("file.txt", 'r') >>>data = file_txt.read() >>>data_list = data.split(' ') >>>print data some_data1 some_data2 some_data3 some_data4 some_data5 >>>print data_list ['\xef\xbb\xbfsome_data1', 'some_data2', 'some_data3', 'some_data4', 'some_data5\n'] As you can see here, when I print my data_list it adds \xef\xbb\xbf and \n to my list. What are these, and how can I clean my list of them? Thanks.
Your file contains UTF-8 BOM in the beginning. To get rid of it, first decode your file contents to unicode. fp = open("file.txt") data = fp.read().decode("utf-8-sig").encode("utf-8") But better don't encode it back to utf-8, but work with unicoded text. There is a good rule: decode all your input text data to unicode as soon as possible, and work only with unicode; and encode the output data to the required encoding as late as possible. This will save you from many headaches. To read bigger files in a certain encoding, use io.open or codecs.open. Also check this. Use str.strip() or str.rstrip() to get rid of the newline character \n.
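In Python 3 the same cleanup is simpler: the "utf-8-sig" codec strips a leading BOM on read, and str.split() discards the trailing newline. A small self-contained sketch (the sample bytes and filename are made up to mimic the OP's file):

```python
# Write sample bytes with a UTF-8 BOM, as the OP's file has on disk.
with open("bom_sample.txt", "wb") as f:
    f.write(b"\xef\xbb\xbfsome_data1 some_data2 some_data3\n")

# "utf-8-sig" strips the BOM automatically; split() eats the '\n'.
with open("bom_sample.txt", encoding="utf-8-sig") as f:
    data_list = f.read().split()

print(data_list)  # ['some_data1', 'some_data2', 'some_data3']
```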
Split
18,664,712
57
Given: String input = "one two three four five six seven"; Is there a regex that works with String.split() to grab (up to) two words at a time, such that: String[] pairs = input.split("some regex"); System.out.println(Arrays.toString(pairs)); results in this: [one two, three four, five six, seven] This question is about the split regex. It is not about "finding a work-around" or other "making it work in another way" solutions.
Currently (last tested on Java 17) it is possible to do this with split(), but in the real world don't use this approach, since it looks like it relies on a bug: look-behind in Java is supposed to have an obvious maximum length, but this solution uses \w+, which doesn't respect that limitation and somehow still works - so if it is a bug that gets fixed in a later release, this solution will stop working. Instead use the Pattern and Matcher classes with a regex like \w+\s+\w+, which aside from being safer also avoids maintenance hell for the person who will inherit such code (remember to "Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live"). Is this what you are looking for? (You can replace \\w with \\S to include all non-space characters, but for this example I will keep \\w since it is easier to read a regex with \\w\\s than \\S\\s.) String input = "one two three four five six seven"; String[] pairs = input.split("(?<!\\G\\w+)\\s"); System.out.println(Arrays.toString(pairs)); output: [one two, three four, five six, seven] \G is the previous match, and (?<!regex) is a negative look-behind. In the split we are trying to find spaces -> \\s that are not preceded -> (?<!negativeLookBehind) by some word -> \\w+ with a previously matched space -> \\G before it -> \\G\\w+. The only confusion I had at the start was how this would work for the first space, since we want that space to be ignored. The important detail is that \\G initially matches the start of the String, ^. So before the first iteration the regex in the negative look-behind looks like (?<!^\\w+), and since the first space does have ^\\w+ before it, it can't be a match for the split. The next space will not have this problem, so it will be matched, and information about it (like its position in the input String) will be stored in \\G and used later in the next negative look-behind. So for the 3rd space the regex will check whether there is a previously matched space \\G and a word \\w+ before it.
Since the result of this test will be positive, the negative look-behind won't accept it, so this space won't be matched; but the 4th space won't have this problem, because the space before it won't be the same as the one stored in \\G (it will have a different position in the input String). Also, if someone would like to separate on, let's say, every 3rd space, you can use this form (based on @maybeWeCouldStealAVan's answer, which was deleted when I posted this fragment of the answer): input.split("(?<=\\G\\w{1,100}\\s\\w{1,100}\\s\\w{1,100})\\s") Instead of 100 you can use some bigger value that is at least the length of the longest word in the String. I just noticed that we can also use + instead of {1,maxWordLength} if we want to split at every odd occurrence, like every 3rd, 5th, 7th; for example String data = "0,0,1,2,4,5,3,4,6,1,3,3,4,5,1,1"; String[] array = data.split("(?<=\\G\\d+,\\d+,\\d+,\\d+,\\d+),");//every 5th comma
Split
16,485,687
57
Possible Duplicate: substring between two delimiters I have a string like "ABC[ This is to extract ]" I want to extract the part "This is to extract" in Java. I am trying to use split, but it is not working the way I want. Does anyone have a suggestion?
If you have just a pair of brackets ( [] ) in your string, you can use indexOf(): String str = "ABC[ This is the text to be extracted ]"; String result = str.substring(str.indexOf("[") + 1, str.indexOf("]"));
Split
13,796,451
57
I have some input that looks like the following: A,B,C,"D12121",E,F,G,H,"I9,I8",J,K The comma-separated values can be in any order. I'd like to split the string on commas; however, in the case where something is inside double quotation marks, I need it to both ignore commas and strip out the quotation marks. So basically, the output would be this list of strings: ['A', 'B', 'C', 'D12121', 'E', 'F', 'G', 'H', 'I9,I8', 'J', 'K']
Lasse is right; it's a comma separated value file, so you should use the csv module. A brief example: from csv import reader # test infile = ['A,B,C,"D12121",E,F,G,H,"I9,I8",J,K'] # real is probably like # infile = open('filename', 'r') # or use 'with open(...) as infile:' and indent the rest for line in reader(infile): print line # for the test input, prints # ['A', 'B', 'C', 'D12121', 'E', 'F', 'G', 'H', 'I9,I8', 'J', 'K']
Split
8,069,975
57
I would like to split a string with delimiters but keep the delimiters in the result. How would I do this in C#?
If the split chars were ,, ., and ;, I'd try: using System.Text.RegularExpressions; ... string[] parts = Regex.Split(originalString, @"(?<=[.,;])") (?<=PATTERN) is positive look-behind for PATTERN. It should match at any place where the preceding text fits PATTERN so there should be a match (and a split) after each occurrence of any of the characters.
Split
4,680,128
57
I've used multiple ways of splitting and stripping the strings in my pandas dataframe to remove all the '\n'characters, but for some reason it simply doesn't want to delete the characters that are attached to other words, even though I split them. I have a pandas dataframe with a column that captures text from web pages using Beautifulsoup. The text has been cleaned a bit already by beautifulsoup, but it failed in removing the newlines attached to other characters. My strings look a bit like this: "hands-on\ndevelopment of games. We will study a variety of software technologies\nrelevant to games including programming languages, scripting\nlanguages, operating systems, file systems, networks, simulation\nengines, and multi-media design systems. We will also study some of\nthe underlying scientific concepts from computer science and related\nfields including" Is there an easy python way to remove these "\n" characters?
EDIT: the correct answer to this is: df = df.replace(r'\n',' ', regex=True) I think you need replace: df = df.replace('\n','', regex=True) Or: df = df.replace('\n',' ', regex=True) Or: df = df.replace(r'\\n',' ', regex=True) Sample: text = '''hands-on\ndev nologies\nrelevant scripting\nlang ''' df = pd.DataFrame({'A':[text]}) print (df) A 0 hands-on\ndev nologies\nrelevant scripting\nla... df = df.replace('\n',' ', regex=True) print (df) A 0 hands-on dev nologies relevant scripting lang
Split
44,227,748
55
Is there a Python-way to split a string after the nth occurrence of a given delimiter? Given a string: '20_231_myString_234' It should be split into (with the delimiter being '_', after its second occurrence): ['20_231', 'myString_234'] Or is the only way to accomplish this to count, split and join?
>>> n = 2 >>> groups = text.split('_') >>> '_'.join(groups[:n]), '_'.join(groups[n:]) ('20_231', 'myString_234') Seems like this is the most readable way; the alternative is regex.
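Wrapped in a small helper (the function name is my own), the same join-based approach generalizes to any delimiter and any occurrence count:

```python
def split_at(s, delim, n):
    """Split s into two parts at the n-th occurrence of delim."""
    groups = s.split(delim)
    return delim.join(groups[:n]), delim.join(groups[n:])

print(split_at('20_231_myString_234', '_', 2))
# ('20_231', 'myString_234')
```

If n exceeds the number of delimiters, the second part simply comes back empty, which is often the behaviour you want anyway.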
Split
17,060,039
55
Right now I'm doing a split on a string and assuming that the newline from the user is \r\n like so: string.split(/\r\n/) What I'd like to do is split on either \r\n or just \n. So what would the regex be to split on either of those?
Did you try /\r?\n/ ? The ? makes the \r optional. Example usage: http://rubular.com/r/1ZuihD0YfF
Split
6,551,128
55
Is there a way to split a string into 2 equal halves without using a loop in Python?
Python 2: firstpart, secondpart = string[:len(string)/2], string[len(string)/2:] Python 3: firstpart, secondpart = string[:len(string)//2], string[len(string)//2:]
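For odd-length strings, the floor division puts the extra character in the second half; a quick sketch:

```python
string = "abcdefg"  # odd length: 7 characters
mid = len(string) // 2  # floor division: 3
firstpart, secondpart = string[:mid], string[mid:]
print(firstpart, secondpart)  # abc defg
```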
Split
4,789,601
55
How can I write a Ruby function that splits the input by any kind of whitespace, and remove all the whitespace from the result? For example, if the input is aa bbb cc dd ee Then return an array ["aa", "bbb", "cc", "dd", "ee"].
This is the default behavior of String#split: input = <<-TEXT aa bbb cc dd ee TEXT input.split Result: ["aa", "bbb", "cc", "dd", "ee"] This works in all versions of Ruby that I tested, including 1.8.7, 1.9.3, 2.0.0, and 2.1.2.
Split
13,537,920
54
Here is the current code in my application: String[] ids = str.split("/"); When profiling the application, a non-negligible amount of time is spent on string splitting. Also, the split method takes a regular expression, which is superfluous here. What alternative can I use in order to optimize the string splitting? Is StringUtils.split faster? (I would've tried and tested myself but profiling my application takes a lot of time.)
String.split(String) won't create regexp if your pattern is only one character long. When splitting by single character, it will use specialized code which is pretty efficient. StringTokenizer is not much faster in this particular case. This was introduced in OpenJDK7/OracleJDK7. Here's a bug report and a commit. I've made a simple benchmark here. $ java -version java version "1.8.0_20" Java(TM) SE Runtime Environment (build 1.8.0_20-b26) Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode) $ java Split split_banthar: 1231 split_tskuzzy: 1464 split_tskuzzy2: 1742 string.split: 1291 StringTokenizer: 1517
Split
11,001,330
54
The Java documentation doesn't seem to mention anything about deprecation for StringTokenizer, yet I keep hearing about how it was deprecated long ago. Was it deprecated because it had bugs/errors, or is String.split() simply better to use overall? I have some code that uses StringTokenizer and I am wondering if I should seriously be concerned about refactoring it to use String.split(), or whether the deprecation is purely a matter of convenience and my code is safe.
Java 10 String Tokenizer -- not deprecated Java 9 String Tokenizer -- not deprecated Java 8 String Tokenizer -- not deprecated Java 7 String Tokenizer -- not deprecated Java 6 String Tokenizer -- not deprecated Java 5 String Tokenizer -- not deprecated If it is not marked as deprecated, it is not going away.
Split
6,983,856
54
I have a formatted string from a log file, which looks like: >>> a="test result" That is, the test and the result are split by some spaces - it was probably created using formatted string which gave test some constant spacing. Simple splitting won't do the trick: >>> a.split(" ") ['test', '', '', '', ... '', '', '', '', '', '', '', '', '', '', '', 'result'] split(DELIMITER, COUNT) cleared some unnecessary values: >>> a.split(" ",1) ['test', ' result'] This helped - but of course, I really need: ['test', 'result'] I can use split() followed by map + strip(), but I wondered if there is a more Pythonic way to do it. Thanks, Adam UPDATE: Such a simple solution! Thank you all.
Just do not give any delimeter? >>> a="test result" >>> a.split() ['test', 'result']
Split
2,492,415
54
I have to do this: AccountList.Split(vbCrLf) In C#, AccountList is a string. How can I do this? Thanks
You are looking for System.Environment.NewLine. On Windows, this is equivalent to \r\n though it could be different under another .NET implementation, such as Mono on Linux, for example.
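As a cross-language aside (not C#): when the goal is splitting text into lines regardless of which platform produced it, a "split on any line break" routine avoids hardcoding the separator altogether. Python's str.splitlines() is a sketch of that idea, treating \r\n, \n, and \r alike:

```python
text = "line1\r\nline2\nline3\rline4"

# splitlines() recognizes \r\n, \n, and \r (plus a few Unicode line
# terminators), so the result doesn't depend on the producing platform.
assert text.splitlines() == ["line1", "line2", "line3", "line4"]
```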
Split
2,401,786
54
Let's say I have a column which has values like: foo/bar chunky/bacon/flavor /baz/quz/qux/bax I.e. a variable number of strings separated by /. In another column I want to get the last element from each of these strings, after they have been split on /. So, that column would have: bar flavor bax I can't figure this out. I can split on / and get an array, and I can see the function INDEX to get a specific numbered indexed element from the array, but can't find a way to say "the last element" in this function.
Edit: this one is simpler: =REGEXEXTRACT(A1,"[^/]+$") You could use this formula: =REGEXEXTRACT(A1,"(?:.*/)(.*)$") And also possible to use it as ArrayFormula: =ARRAYFORMULA(REGEXEXTRACT(A1:A3,"(?:.*/)(.*)$")) Here's some more info: the RegExExtract function Some good examples of syntax my personal list of Regex Tricks This formula will do the same: =INDEX(SPLIT(A1,"/"),LEN(A1)-len(SUBSTITUTE(A1,"/",""))) But it takes A1 three times, which is not preferable.
Split
37,390,009
52
Afternoon, I need to split an array into smaller "chunks". I am passing over about 1200 items, and need to split these into easier to handle arrays of 100 items each, which I then need to process. Could anyone please make some suggestions?
Array.Copy has been around since 1.1 and does an excellent job of chunking arrays. List.GetRange() would also be a good choice as mentioned in another answer. string[] buffer; for(int i = 0; i < source.Length; i+=100) { buffer = new string[100]; Array.Copy(source, i, buffer, 0, 100); // process array } And to make an extension for it: public static class Extensions { public static T[] Slice<T>(this T[] source, int index, int length) { T[] slice = new T[length]; Array.Copy(source, index, slice, 0, length); return slice; } } And to use the extension: string[] source = new string[] { 1200 items here }; // get the first 100 string[] slice = source.Slice(0, 100); Update: I think you might be wanting ArraySegment<> No need for performance checks, because it simply uses the original array as its source and maintains an Offset and Count property to determine the 'segment'. Unfortunately, there isn't a way to retrieve JUST the segment as an array, so some folks have written wrappers for it, like here: ArraySegment - Returning the actual segment C# ArraySegment<string> segment; for (int i = 0; i < source.Length; i += 100) { segment = new ArraySegment<string>(source, i, 100); // and to loop through the segment for (int s = segment.Offset; s < segment.Array.Length; s++) { Console.WriteLine(segment.Array[s]); } } Performance of Array.Copy vs Skip/Take vs LINQ Test method (in Release mode): static void Main(string[] args) { string[] source = new string[1000000]; for (int i = 0; i < source.Length; i++) { source[i] = "string " + i.ToString(); } string[] buffer; Console.WriteLine("Starting stop watch"); Stopwatch sw = new Stopwatch(); for (int n = 0; n < 5; n++) { sw.Reset(); sw.Start(); for (int i = 0; i < source.Length; i += 100) { buffer = new string[100]; Array.Copy(source, i, buffer, 0, 100); } sw.Stop(); Console.WriteLine("Array.Copy: " + sw.ElapsedMilliseconds.ToString()); sw.Reset(); sw.Start(); for (int i = 0; i < source.Length; i += 100) { buffer = new string[100]; buffer 
= source.Skip(i).Take(100).ToArray(); } sw.Stop(); Console.WriteLine("Skip/Take: " + sw.ElapsedMilliseconds.ToString()); sw.Reset(); sw.Start(); String[][] chunks = source .Select((s, i) => new { Value = s, Index = i }) .GroupBy(x => x.Index / 100) .Select(grp => grp.Select(x => x.Value).ToArray()) .ToArray(); sw.Stop(); Console.WriteLine("LINQ: " + sw.ElapsedMilliseconds.ToString()); } Console.ReadLine(); } Results (in milliseconds): Array.Copy: 15 Skip/Take: 42464 LINQ: 881 Array.Copy: 21 Skip/Take: 42284 LINQ: 585 Array.Copy: 11 Skip/Take: 43223 LINQ: 760 Array.Copy: 9 Skip/Take: 42842 LINQ: 525 Array.Copy: 24 Skip/Take: 43134 LINQ: 638
Split
11,207,526
52
I am trying to split a column into multiple columns based on comma/space separation. My dataframe currently looks like KEYS 1 0 FIT-4270 4000.0439 1 FIT-4269 4000.0420, 4000.0471 2 FIT-4268 4000.0419 3 FIT-4266 4000.0499 4 FIT-4265 4000.0490, 4000.0499, 4000.0500, 4000.0504, I would like KEYS 1 2 3 4 0 FIT-4270 4000.0439 1 FIT-4269 4000.0420 4000.0471 2 FIT-4268 4000.0419 3 FIT-4266 4000.0499 4 FIT-4265 4000.0490 4000.0499 4000.0500 4000.0504 My code currently removes the KEYS column and I'm not sure why. Could anyone improve or help fix the issue? v = dfcleancsv[1] #splits the columns by spaces into new columns but removes KEYS? dfcleancsv = dfcleancsv[1].str.split(' ').apply(Series, 1)
In case someone else wants to split a single column (delimited by a value) into multiple columns - try this: series.str.split(',', expand=True) This answered the question I came here looking for. Credit to EdChum's code that includes adding the split columns back to the dataframe. pd.concat([df[[0]], df[1].str.split(', ', expand=True)], axis=1) Note: The first argument df[[0]] is a DataFrame. The second argument df[1].str.split is the series that you want to split. split Documentation concat Documentation
Split
37,600,711
51
I was following this article And I came up with this code: string FileName = "C:\\test.txt"; using (StreamReader sr = new StreamReader(FileName, Encoding.Default)) { string[] stringSeparators = new string[] { "\r\n" }; string text = sr.ReadToEnd(); string[] lines = text.Split(stringSeparators, StringSplitOptions.None); foreach (string s in lines) { Console.WriteLine(s); } } Here is the sample text: somet interesting text\n some text that should be in the same line\r\n some text should be in another line Here is the output: somet interesting text\r\n some text that should be in the same line\r\n some text should be in another line\r\n But what I want is this: somet interesting textsome text that should be in the same line\r\n some text should be in another line\r\n I think I should get this result but somehow I am missing something...
The problem is not with the splitting but rather with the WriteLine. A \n in a string printed with WriteLine will produce an "extra" line. Example var text = "somet interesting text\n" + "some text that should be in the same line\r\n" + "some text should be in another line"; string[] stringSeparators = new string[] { "\r\n" }; string[] lines = text.Split(stringSeparators, StringSplitOptions.None); Console.WriteLine("Nr. Of items in list: " + lines.Length); // 2 lines foreach (string s in lines) { Console.WriteLine(s); //But will print 3 lines in total. } To fix the problem remove \n before you print the string. Console.WriteLine(s.Replace("\n", ""));
Split
22,185,009
51
I am trying to split up a string by caps using Javascript. Examples of what I'm trying to do: "HiMyNameIsBob" -> "Hi My Name Is Bob" "GreetingsFriends" -> "Greetings Friends" I am aware of the str.split() method, however I am not sure how to make this function work with capital letters. I've tried: str.split("(?=\\p{Upper})") Unfortunately that doesn't work, any help would be great.
Use RegExp-literals, a look-ahead and [A-Z]: console.log( // -> "Hi My Name Is Bob" window.prompt('input string:', "HiMyNameIsBob").split(/(?=[A-Z])/).join(" ") )
Split
10,064,683
51
I have a table with data 1/1 to 1/20 in one column. I want the value 1 to 20, i.e. the value after the '/' (forward slash), to be put into another column in the same table in SQL Server. Example: the column has values 1/1, 1/2, 1/3 ... 1/20; the new column should have the values 1, 2, 3 ... 20. That is, I want to update this new column.
Try this: UPDATE YourTable SET Col2 = RIGHT(Col1,LEN(Col1)-CHARINDEX('/',Col1))
Split
9,260,044
51
I have a string and I would like to split that string by delimiter at a certain position. For example, my string is F/P/O and the result I am looking for is part1 = 'F/P' and part2 = 'O'. Therefore, I would like to separate the string by the furthest delimiter. Note: some of my strings are F/O also, for which my SQL below works fine and returns the desired result. The SQL I wrote is as follows: SELECT Substr('F/P/O', 1, Instr('F/P/O', '/') - 1) part1, Substr('F/P/O', Instr('F/P/O', '/') + 1) part2 FROM dual and the result is part1 = 'F' and part2 = 'P/O'. Why is this happening and how can I fix it?
Therefore, I would like to separate the string by the furthest delimiter. I know this is an old question, but this is a simple requirement for which SUBSTR and INSTR would suffice. REGEXP functions are still slower and more CPU-intensive operations than the old substr and instr functions. SQL> WITH DATA AS 2 ( SELECT 'F/P/O' str FROM dual 3 ) 4 SELECT SUBSTR(str, 1, Instr(str, '/', -1, 1) -1) part1, 5 SUBSTR(str, Instr(str, '/', -1, 1) +1) part2 6 FROM DATA 7 / PART1 PART2 ----- ----- F/P O As you said you want the furthest delimiter, it would mean the first delimiter from the reverse. Your approach was fine, but you were missing the start_position in INSTR. If the start_position is negative, the INSTR function counts back start_position number of characters from the end of string and then searches towards the beginning of string.
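As an aside for readers working outside SQL: the same "split at the last delimiter" operation is built into many string libraries, so no regex is needed there either. A Python sketch of the equivalent of the INSTR(str, '/', -1, 1) backward search:

```python
# rpartition splits at the LAST occurrence of the separator,
# mirroring INSTR's backward search with a negative start position.
part1, _, part2 = "F/P/O".rpartition("/")
assert (part1, part2) == ("F/P", "O")

# Works the same when there is only one delimiter, as in "F/O".
assert "F/O".rpartition("/")[::2] == ("F", "O")
```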
Split
26,878,291
50
let string = "hello hi" var hello = "" var hi = "" I want to split my string so that the value of hello gets "hello" and the value of hi gets "hi"
Try this: var myString: String = "hello hi"; var myStringArr = myString.componentsSeparatedByString(" ") Where myString is the name of your string, and myStringArr contains the components separated by the space. Then you can get the components as: var hello: String = myStringArr [0] var hi: String = myStringArr [1] Doc: componentsSeparatedByString EDIT: For Swift 3, the above will be: var myStringArr = myString.components(separatedBy: " ") Doc: components(separatedBy:)
Split
25,818,197
50
I need help splitting a string in javascript by space (" "), ignoring spaces inside quoted expressions. I have this string: var str = 'Time:"Last 7 Days" Time:"Last 30 Days"'; I would expect my string to be split to 2: ['Time:"Last 7 Days"', 'Time:"Last 30 Days"'] but my code splits it to 4: ['Time:', '"Last 7 Days"', 'Time:', '"Last 30 Days"'] this is my code: str.match(/(".*?"|[^"\s]+)(?=\s*|\s*$)/g); Thanks!
s = 'Time:"Last 7 Days" Time:"Last 30 Days"' s.match(/(?:[^\s"]+|"[^"]*")+/g) // -> ['Time:"Last 7 Days"', 'Time:"Last 30 Days"'] Explained: (?: # non-capturing group [^\s"]+ # anything that's not a space or a double-quote | # or… " # opening double-quote [^"]* # …followed by zero or more chacacters that are not a double-quote " # …closing double-quote )+ # each match is one or more of the things described in the group Turns out, to fix your original expression, you just need to add a + on the group: str.match(/(".*?"|[^"\s]+)+(?=\s*|\s*$)/g) # ^ here.
Split
16,261,635
49
In Swift, it's easy to split a string on a character and return the result in an array. What I'm wondering is if you can split a string by another string instead of just a single character, like so... let inputString = "This123Is123A123Test" let splits = inputString.split(onString:"123") // splits == ["This", "Is", "A", "Test"] I think NSString may have a way to do as much, and of course I could roll my own in a String extension, but I'm looking to see if Swift has something natively.
import Foundation let inputString = "This123Is123A123Test" let splits = inputString.components(separatedBy: "123")
Split
49,472,570
48
I have a huge CSV with many tables with many rows. I would like to simply split each dataframe into 2 if it contains more than 10 rows. If true, I would like the first dataframe to contain the first 10 and the rest in the second dataframe. Is there a convenient function for this? I've looked around but found nothing useful... i.e. split_dataframe(df, 2(if > 10))?
I used a List Comprehension to cut a huge DataFrame into blocks of 100'000: size = 100000 list_of_dfs = [df.loc[i:i+size-1,:] for i in range(0, len(df),size)] or as generator: list_of_dfs = (df.loc[i:i+size-1,:] for i in range(0, len(df),size))
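One caveat with the recipe above: df.loc slices by index label, so it assumes a default RangeIndex; df.iloc is the positional equivalent when the index has been filtered or reordered. The underlying start/stop arithmetic, sketched on a plain list so it runs without pandas:

```python
size = 4
data = list(range(10))

# Same start/stop arithmetic as the DataFrame version:
# the last chunk may be shorter than `size`.
chunks = [data[i:i + size] for i in range(0, len(data), size)]
assert chunks == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```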
Split
25,290,757
48
I am trying to get an array of text after using the text function, having a ul list like <ul> <li>one <ul> <li>one one</li> <li>one two</li> </ul> </li> </ul> then get something like ['one', 'one one', 'one two']
var array = $('li').map(function(){ return $.trim($(this).text()); }).get(); http://api.jquery.com/map/ Demo Fiddle --> http://jsfiddle.net/EC4yq/
Split
16,570,564
48
I would like to split a string only where there are two or more consecutive whitespace characters. For example str = '10DEUTSCH GGS Neue Heide 25-27 Wahn-Heide -1 -1' print(str.split()) Results: ['10DEUTSCH', 'GGS', 'Neue', 'Heide', '25-27', 'Wahn-Heide', '-1', '-1'] I would like it to look like this: ['10DEUTSCH', 'GGS Neue Heide 25-27', 'Wahn-Heide', '-1', '-1']
>>> import re >>> text = '10DEUTSCH GGS Neue Heide 25-27 Wahn-Heide -1 -1' >>> re.split(r'\s{2,}', text) ['10DEUTSCH', 'GGS Neue Heide 25-27', 'Wahn-Heide', '-1', '-1'] Where \s matches any whitespace character, like \t\n\r\f\v and more {2,} is a repetition, meaning "2 or more"
Split
12,866,631
47
I have a JSON feed connected to my app. One of the items is lat & long separated by a comma. For example: "32.0235, 1.345". I'm trying to split this up into two separate values by splitting at the comma. Any advice? Thanks!!
NSArray *strings = [coords componentsSeparatedByString:@","];
Split
6,443,535
47
In Tinkerpop 3, how to perform pagination? I want to fetch the first 10 elements of a query, then the next 10 without having to load them all in memory. For example, the query below returns 1,000,000 records. I want to fetch them 10 by 10 without loading all 1,000,000 at once. g.V().has("key", value).limit(10) Edit A solution that works through HttpChannelizer on Gremlin Server would be ideal.
From a functional perspective, a nice looking bit of Gremlin for paging would be: gremlin> g.V().hasLabel('person').fold().as('persons','count'). select('persons','count'). by(range(local, 0, 2)). by(count(local)) ==>[persons:[v[1],v[2]],count:4] gremlin> g.V().hasLabel('person').fold().as('persons','count'). select('persons','count'). by(range(local, 2, 4)). by(count(local)) ==>[persons:[v[4],v[6]],count:4] In this way you get the total count of vertices with the result. Unfortunately, the fold() forces you to count all the vertices which will require iterating them all (i.e. bringing them all into memory). There really is no way to avoid iterating all 100,000 vertices in this case as long as you intend to execute your traversal in multiple separate attempts. For example: gremlin> g.V().hasLabel('person').range(0,2) ==>v[1] ==>v[2] gremlin> g.V().hasLabel('person').range(2,4) ==>v[4] ==>v[6] The first statement is the same as if you'd terminated the traversal with limit(2). On the second traversal, which only wants the second two vertices, it's not as though you magically skip iterating the first two, as it is a new traversal. I'm not aware of any TinkerPop graph database implementation that will do that efficiently - they all have that behavior. The only way to do ten vertices at a time without having them all in memory is to use the same Traversal instance as in: gremlin> t = g.V().hasLabel('person');[] gremlin> t.next(2) ==>v[1] ==>v[2] gremlin> t.next(2) ==>v[4] ==>v[6] With that model you only iterate the vertices once and don't bring them all into memory at a single point in time. Some other thoughts on this topic can be found in this blog post.
Gremlin
39,826,983
23
I'm trying to understand how this pattern for a conditional insert works: g.V() .hasLabel('person').has('name', 'John') .fold() .coalesce( __.unfold(), g.addV('person').property('name', 'John') ).next(); What is the purpose of the fold/unfold? Why are these necessary, and why does this not work: g.V() .coalesce( __.hasLabel('person').has('name', 'John'), g.addV('person').property('name', 'John') ).next(); The fold-then-unfold pattern seems redundant to me and yet the above does not yield the same result.
Consider what happens when you just do the following: gremlin> g = TinkerFactory.createModern().traversal() ==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard] gremlin> g.V().has('name','marko') ==>v[1] gremlin> g.V().has('name','stephen') gremlin> For "marko" you return something and for "stephen" you do not. The "stephen" case is the one to pay attention to because that is the one where the fold() truly becomes important in this pattern. When that traversal returns nothing, any steps you add after that will not have a Traverser present to trigger actions in those steps. Therefore even the following will not add a vertex: gremlin> g.V().has('name','stephen').addV('person') gremlin> But looks what happens if we fold(): gremlin> g.V().has('name','stephen').fold() ==>[] fold() is a reducing barrier step and will thus eagerly evaluate the traversal up to that point and return the contents as a List even if the contents of that traversal up to that point yield nothing (in which case, as you can see, you get an empty list). And if you have an empty List that empty List is a Traverser flowing through the traversal and therefore future steps will fire: gremlin> g.V().has('name','stephen').fold().addV('person') ==>v[13] So that explains why we fold() because we are checking for existence of "John" in your example and if he's found then he will exist in the List and when that List with "John" hits coalesce() its first check will be to unfold() that List with "John" and return that Vertex - done. If the List is empty and returns nothing because "John" does not exist then it will add the vertex (by the way, you don't need the "g." in front of addV(), it should just be an anonymous traversal and thus __.addV('person')). Turning to your example, I would first point out that I think you wanted to ask about this: g.V(). coalesce( __.has('person','name', 'John'), __.addV('person').property('name', 'John')) This is a completely different query. 
In this traversal, you're saying iterate all the vertices and for each one execute what is in the coalesce(). You can see this fairly plainly by replacing the addV() with constant('x'): gremlin> g = TinkerFactory.createModern().traversal() ==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard] gremlin> g.V(). ......1> coalesce( ......2> has('person','name', 'John'), ......3> constant('x')) ==>x ==>x ==>x ==>x ==>x ==>x gremlin> g.V(). ......1> coalesce( ......2> has('person','name', 'marko'), ......3> constant('x')) ==>v[1] ==>x ==>x ==>x ==>x ==>x Now, imagine what happens with addV() and "John". It will call addV() 6 times, once for each vertex it comes across that is not "John": gremlin> g.V(). ......1> coalesce( ......2> __.has('person','name', 'John'), ......3> __.addV('person').property('name', 'John')) ==>v[13] ==>v[15] ==>v[17] ==>v[19] ==>v[21] ==>v[23] Personally, I like the idea of wrapping up this kind of logic in a Gremlin DSL - there is a good example of doing so here. Nice question - I've described the "Element Existence" issue as part of a Gremlin Recipe that can be read here. UPDATE: As of TinkerPop 3.6.0, the fold()/coalesce()/unfold() pattern has been largely replaced by the new steps of mergeV() and mergeE() which greatly simplify the Gremlin required to do an upsert-like operation.
Gremlin
51,784,430
22
I've been playing with the Titan graph server for a while now. And my feeling is that, despite extensive documentation, there is a lack of a getting-started-from-scratch tutorial. My final goal is to have Titan running on Cassandra and query with StartTheShift/thunderdome. I have seen a few ways of starting Titan: Using Rexster from this link, I was able to run a Titan server with the following steps: download rexster-server 2.3 download titan 0.3.0 copy all files from titan-all-0.3.0/libs to rexster-server-2.3.0/ext/titan edit rexster-server-2.3.0/rexster.xml and add (inside the <graphs> element): <graph> <graph-name>geograph</graph-name> <graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type> <graph-read-only>false</graph-read-only> <graph-location>/Users/vallette/projects/DATA/gdb</graph-location> <properties> <storage.backend>local</storage.backend> <storage.directory>/Users/vallette/projects/DATA/gdb</storage.directory> <buffer-size>100</buffer-size> </properties> <extensions> <allows> <allow>tp:gremlin</allow> </allows> </extensions> </graph> for a BerkeleyDB, or: <graph> <graph-name>geograph</graph-name> <graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type> <graph-location></graph-location> <graph-read-only>false</graph-read-only> <properties> <storage.backend>cassandra</storage.backend> <storage.hostname>77.77.77.77</storage.hostname> </properties> <extensions> <allows> <allow>tp:gremlin</allow> </allows> </extensions> </graph> for a Cassandra db. launch the server with ./bin/rexster.sh -s -c rexster.xml download the Rexster console and run it with bin/rexster-console.sh you can now connect to your graph with g = rexster.getGraph("geograph") The problem with this method is that you are connected via Rexster and not Gremlin, so you do not have autocompletion. The advantage is that you can name your database (here geograph).
Using Titan server with Cassandra start the server with ./bin/titan.sh config/titan-server-rexster.xml config/titan-server-cassandra.properties create a file called cassandra.local with storage.backend=cassandrathrift storage.hostname=127.0.0.1 start the Titan Gremlin console and connect with g = TitanFactory.open("cassandra-es.local") this works fine. Using Titan server with BerkeleyDB From this link: download titan 0.3.0 start the server with ./bin/titan.sh config/titan-server-rexster.xml config/titan-server-berkeleydb.properties launch the Titan Gremlin console: ./bin/gremlin.sh but once I try to connect to the database (graph) in Gremlin with g = TitanFactory.open('graph') it creates a new database called graph in the directory I'm in. If I execute this in the directory where my (populated) database is, I get: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager Could someone clarify this process and tell me what I'm doing wrong. Thanks
According to the documentation TitanFactory.open() takes either the name of a config file or the name of a directory to open or create a database in. If what steven says is true, there would be two ways to connect to the database with a BerkelyDB backend: Start the database through bin/titan.sh. Connect to the database through the rexster console. DON'T start the database using bin/titan.sh. Use the gremlin console instead: TitanFactory.open("database-location"). This will open the database. But this does not have a rexster server. Nothing else will be able to access the database but the gremlin console.
Gremlin
16,397,426
22
I am new to the Gremlin query language. I have to insert data into a Cosmos DB graph (using the Gremlin.Net package), whether the Vertex (or Edge) already exists in the graph or not. If the data exists, I only need to update the properties. I wanted to use this kind of pattern: g.V().hasLabel('event').has('id','1').tryNext().orElseGet {g.addV('event').has('id','1')} But it is not supported by the Gremlin.Net / Cosmos DB graph API. Is there a way to make a kind of upsert query in a single query? Thanks in advance.
There are a number of ways to do this but I think that the TinkerPop community has generally settled on this approach: g.V().has('event','id','1'). fold(). coalesce(unfold(), addV('event').property('id','1')) Basically, it looks for the "event" with has() and uses fold() step to coerce to a list. The list will either be empty or have a Vertex in it. Then with coalesce(), it tries to unfold() the list and if it has a Vertex that is immediately returned otherwise, it does the addV(). If the idea is to update existing properties if the element is found, just add property() steps after the coalesce(): g.V().has('event','id','1'). fold(). coalesce(unfold(), addV('event').property('id','1')). property('description','This is an event') If you need to know if the vertex returned was "new" or not then you could do something like this: g.V().has('event','id','1'). fold(). coalesce(unfold(). project('vertex','exists'). by(identity()). by(constant(true)), addV('event').property('id','1'). project('vertex','exists'). by(identity()). by(constant(false))) Additional reading on this topic can be found on this question: "Why do you need to fold/unfold using coalesce for a conditional insert?" Also note that optional edge insertion is described here: "Add edge if not exist using gremlin". As a final note, while this question was asked regarding CosmosDB, the answer generally applies to all TinkerPop-enabled graphs. Of course, how a graph optimizes this Gremlin is a separate question. If a graph has native upsert capabilities, that capability may or may not be used behind the scenes of this Gremlin so there may be better ways to implement upsert by way of the graphs systems native API (of course, choosing that path reduces the portability of your code). UPDATE: As of TinkerPop 3.6.0, the fold()/coalesce()/unfold() pattern has been largely replaced by the new steps of mergeV() and mergeE() which greatly simplify the Gremlin required to do an upsert-like operation. 
Under 3.6.0 and newer versions, you would replace the first example with: g.mergeV([(label): 'event', id: '1']) or perhaps better, treat the property key named "id" as an actual vertex identifier of T (I've added the property key of "name" to help with the example): g.mergeV([(label): 'event', (id): '1', name: 'stephen']) The above will search for a vertex with the T.label, T.id and "name" in the Map. If it finds it, it is returned. If it does not find it, the Vertex is created using those values. If you have the T.id it may be even better to do: g.mergeV([(id): '1']). option(onCreate, [(label): 'event', name: 'stephen']) In this way, you limit the search criteria to just the identifier which is enough to uniquely identify it and avoids additional filters and then if the verted is not found the onCreate is triggered to use the supplied Map to create the vertex in conjunction with the search criteria Map.
Gremlin
49,758,417
20
What is the best way to create a bidirectional edge between two vertices using gremlin. Is there a direct command which adds the edge or should we add two edges like vertex X -> Vertex Y and vertex Y -> Vertex X?
You can add an edge between two vertices and then ignore the direction at query-time by using the both() step. This is how you typically address bidirectional edges in Gremlin. Let's open the Gremlin Console and create a simple graph where Alice and Bob are friends: \,,,/ (o o) -----oOOo-(3)-oOOo----- gremlin> graph = TinkerGraph.open() gremlin> g = graph.traversal(standard()) ==>graphtraversalsource[tinkergraph[vertices:0 edges:0], standard] gremlin> ==>null gremlin> g.addV(label, 'user', 'name', 'Alice').as('alice').addV(label, 'user', 'name', 'Bob').as('bob').addE('friendWith').from('alice').to('bob') ==>e[4][0-friendWith->2] This creates a graph with two vertices and one edge: gremlin> g.V() ==>v[0] ==>v[2] gremlin> g.E() ==>e[4][0-friendWith->2] Notice how you cannot traverse from the Bob vertex to the Alice vertex in the outgoing direction, but you can traverse in the incoming direction (the first query yields no result). gremlin> g.V().has('name', 'Bob').out('friendWith') gremlin> g.V().has('name', 'Bob').in('friendWith') ==>v[0] Or starting from Alice (the second query yields no result), you get the opposite: gremlin> g.V().has('name', 'Alice').out('friendWith') ==>v[2] gremlin> g.V().has('name', 'Alice').in('friendWith') However, you can traverse the graph in both directions with the both() step, and retrieve Alice's friend or Bob's friend. gremlin> g.V().has('name', 'Alice').both('friendWith') ==>v[2] gremlin> g.V().has('name', 'Bob').both('friendWith') ==>v[0] This would also work on more complex graphs with more than two vertices and one friendship relationship. The both() step simply ignores the direction of the edges when attempting to traverse to adjacent vertices.
Gremlin
40,699,756
18
GREMLIN and SPARQL only define the APIs for graph queries. How do I use the API responses and and plot that as an actual graph, with edges and vertices? Is there something like MySQL Workbench for graphs?
UPDATE: As of Nov 2019, Neptune launched Workbench, which is a Jupyter based visualization for Gremlin and SPARQL. UPDATE: As of Aug 2020, Neptune Workbench extended support for visualizing graph data as nodes and edges in addition to the tabular representation that was previously supported. https://aws.amazon.com/about-aws/whats-new/2019/12/amazon-neptune-workbench-provides-in-console-experience-to-query-your-graph/ https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-neptune-announces-graph-visualization-in-neptune-workbench/ Neptune Workbench is basically a Sagemaker instance preconfigured with extensions to help execute Gremlin and SPARQL queries, as well as other Neptune APIs like /loader, /status etc. You can easily create these notebooks from the Neptune console. There are no additional charges for the workbench, apart from the Sagemaker costs incurred by the notebook. These notebooks do support Start and Stop APIs, thereby making it possible for you to enable them only when you need them. A very recent blog post walking you through some of the features: https://aws.amazon.com/blogs/database/visualize-query-results-using-the-amazon-neptune-workbench/ (Screenshots of the SPARQL and Gremlin visualizations omitted.)
Gremlin
52,848,013
16
I am working with groovy (gremlin to traverse a graph database to be exact). Unfortunately, because I am using gremlin, I cannot import new classes. I have some date values that I wish to convert to a Unix timestamp. They are stored as UTC in the format: 2012-11-13 14:00:00:000 I am parsing it using this snippet (in groovy): def newdate = new Date().parse("yyyy-M-d H:m:s:S", '2012-11-13 14:00:00:000') The problem is that it does a timezone conversion, which results in: Tue Nov 13 14:00:00 EST 2012 And if I then convert that to a timestamp using time(), that gets converted to UTC, then the timestamp generated. How do I get new Date() to not do any timezone conversions when the date is first parsed (and just assume the date as UTC)?
Here are two ways to do it in Java: /* * Add the TimeZone info to the end of the date: */ String dateString = "2012-11-13 14:00:00:000"; SimpleDateFormat sdf = new SimpleDateFormat("yyyy-M-d H:m:s:S Z"); Date theDate = sdf.parse(dateString + " UTC"); /* * Use SimpleDateFormat.setTimeZone() */ String dateString = "2012-11-13 14:00:00:000"; SimpleDateFormat sdf = new SimpleDateFormat("yyyy-M-d H:m:s:S"); sdf.setTimeZone(TimeZone.getTimeZone("UTC")); Date theDate = sdf.parse(dateString); Note that Date.parse() is deprecated (so I did not recommend it).
Gremlin
13,390,631
16
I have an array of usernames (e.g. ['abc','def','ghi']) to be added under the 'user' label in the graph. Now I first want to check if the username already exists (g.V().hasLabel('user').has('username','def')) and then add only those for which the username property doesn't match under the 'user' label. Also, can this be done in a single Gremlin query or Groovy script? I am using the Titan graph database, TinkerPop 3 and the Gremlin REST server.
With "scripts" you can always pass a multi-line/command script to the server for processing to get what you want done. This question is then answered with normal programming techniques using variables, if/then statements, etc: t = g.V().has('person','name','bill') t.hasNext() ? t.next() : g.addV('person').property('name','bill').next() or perhaps: g.V().has('person','name','bill').tryNext().orElseGet{ g.addV('person').property('name','bill').next()} But these are groovy scripts and ultimately TinkerPop recommends avoiding scripts and closures in favor of a pure traversal. The general way to handle a "get or create" in a single traversal is to do something like this: gremlin> g.V().has('person','name','bill').fold(). ......1> coalesce(unfold(), ......2> addV('person').property('name','bill')) ==>v[18] Also see this StackOverflow question for more information on upsert/"get or create" patterns. UPDATE: As of TinkerPop 3.6.0, the fold()/coalesce()/unfold() pattern has been largely replaced by the new steps of mergeV() and mergeE() which greatly simplify the Gremlin required to do an upsert-like operation. Under 3.6.0 and newer versions, you would write: g.mergeV([(label): 'person', name: 'bill'])
Gremlin
46,027,444
14
Is it possible to search vertex properties with a "contains" in Azure Cosmos DB Graph? For example, I would like to find all persons who have 'Jr' in their name:

g.V().hasLabel('person').has('name',within('Jr')).values('name')

It seems the within() predicate only matches values that are exactly equal to 'Jr'. I am looking for a contains, ideally case insensitive.
None of the text matching functions are available for CosmosDB at this time. However, I was able to implement a wildcard search functionality by using a UDF (User Defined Function) which uses the JavaScript match() function:

function userDefinedFunction(input, pattern) {
    return input.match(pattern) !== null;
};

Then you'd have to write your query as SQL and use the UDF that you defined (the example below assumes you called your function 'REGEX'):

SELECT * FROM c where(udf.REGEX(c.name[0]._value, '.*Jr.*') and c.label='person')

The performance will be far from ideal, so you need to decide if the solution is acceptable or not based on your latency and cost perspectives.
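As a sanity check of the pattern semantics: the JavaScript match() call above does an unanchored regex search, so '.*Jr.*' matches anywhere in the value. A small Python sketch of the same contract (re.search stands in for match(), and the IGNORECASE variant shows one way to get the "ideally case insensitive" behaviour asked about; the sample names are made up):

```python
import re

def regex_udf(value, pattern):
    # Same contract as the JS UDF: True when the pattern matches anywhere.
    return re.search(pattern, value) is not None

names = ["Robert Downey Jr.", "Sammy Davis jr", "Mary"]

# Case-sensitive, as in the SQL example above.
hits = [n for n in names if regex_udf(n, ".*Jr.*")]

# Case-insensitive variant via a regex flag.
hits_ci = [n for n in names if re.search("jr", n, re.IGNORECASE)]
```

Note the `.*` wrappers are redundant with an unanchored search; `"Jr"` alone would match the same values.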
Gremlin
44,864,203
13
I wanted to model a partner relationship like the following, which I expressed in the format of a labeled property graph. I wanted to use RDF to express the above graph; in particular I wanted to understand if I can express the label of the "loves" edge (which is a URI to an article/letter). I am new to RDF, and I know that RDF can easily express the node properties in the LPG, but is it possible to conveniently express the edge properties?

A bit more context for this question: the reason I wanted to use RDF (rather than Gremlin) is that I wanted to add some reasoning capability in the long run.

Further added question: if we choose an RDF model to represent the above LPG, in plain English, I wanted to answer the following questions with a SPARQL query:

Is Bob in love with any one? If so, who is he in love with and why?
How complex would the SPARQL statement be to query out the loveletters.com/123?
RDF doesn't support edge properties, so the brief answer is no. But of course there are ways to model this kind of thing in RDF.

Plain RDF triple without edge properties

If we didn't want to annotate the edge, the relationship between Bob and Mary would simply be a triple with Bob as subject, Mary as object, and "loves" as predicate:

PREFIX : <http://example.org/ontology#>
PREFIX person: <http://example.org/data/person/>

person:Bob :loves person:Mary.

So how can we add annotations?

Option 1: Using RDF Reification

RDF has a built-in solution called "RDF reification". It allows making statements about statements:

PREFIX : <http://example.org/ontology#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX person: <http://example.org/data/person/>
PREFIX statement: <http://example.org/data/statement/>

person:Bob :loves person:Mary.

statement:1 a rdf:Statement;
    rdf:subject person:Bob;
    rdf:predicate :loves;
    rdf:object person:Mary;
    :reason <http://loveletters.com/123>.

So we say that there is a statement with Bob as subject, Mary as object, and "loves" as predicate. Then we can add properties to that statement. The downside is that it is kind of redundant. First we add the "loves" triple, then we add four more triples to replicate the "loves" triple.

Option 2: Modelling relationships as entities

Another approach is to change the model. Instead of considering "loves" an edge between people, we consider it a node in itself. A node that represents the relationship, and is connected to the two parties involved.

PREFIX relationship: <http://example.org/data/relationship/>

relationship:1 a :LovesRelationship;
    :who person:Bob;
    :whom person:Mary;
    :reason <http://loveletters.com/123>.

So in our model we created a class :LovesRelationship that represents "loves", and properties :who and :whom to indicate the two parties. The downside of this approach is that the graph structure no longer directly represents our social network.
So when querying how two people are related, we always have to go through those relationship entities instead of just dealing with edges connecting people.

Option 3: Using RDF*

There is a proposal called RDF* that addresses this problem quite nicely. (Sometimes it's called RDR or Reification Done Right.) RDF*/RDR adds new syntax that allows triples to be the subject of other triples:

<<person:Bob :loves person:Mary>> :reason <http://loveletters.com/123>.

The downside is that it is non-standard and so far supported only by a few systems (Blazegraph, AnzoGraph, and an extension for Jena). As of April 2019, Neptune is not among them.

Query: Is Bob in love with anyone?

This is easy to do in the basic RDF version as well as in Option 1 and Option 3:

ASK { person:Bob :loves ?anyone }

Option 2 requires a different query, because of the changed model:

ASK {
  ?rel a :LovesRelationship;
       :who person:Bob.
}

This would match any :LovesRelationship where the :who property is Bob, regardless of the :whom and :reason properties.

Query: Who is Bob in love with and why?

Option 1, RDF Reification:

SELECT ?whom ?why {
  ?statement a rdf:Statement;
      rdf:subject person:Bob;
      rdf:predicate :loves;
      rdf:object ?whom;
      :reason ?why.
}

I find this query not very intuitive, because it talks about RDF statements, while we are really interested in people and relationships.

Option 2, relationship modelled as entity:

SELECT ?whom ?why {
  ?rel a :LovesRelationship;
      :who person:Bob;
      :whom ?whom;
      :reason ?why.
}

This is better in my eyes; once you have accepted that relationships are entities in this model, it becomes fairly intuitive.

Option 3, RDF*, using SPARQL*:

SELECT ?whom ?why {
  <<person:Bob :loves ?whom>> :reason ?why.
}

This is concise and intuitive, so it's a shame we can't currently use it in most SPARQL systems!
Gremlin
55,885,944
13
I'm new to Gremlin and just trying to build out a basic graph. I've been able to do a basic addEdge on new vertices, i.e.

gremlin> v1 = g.addVertex()
==>v[200004]
gremlin> v2 = g.addVertex()
==>v[200008]
gremlin> e = g.addEdge(v1, v2, 'edge label')
==>e[4c9f-Q1S-2F0LaTPQN8][200004-edge label->200008]

I have also been able to create an edge between vertices looked up by id:

gremlin> v1 = g.v(200004)
==>v[200004]
gremlin> v2 = g.v(200008)
==>v[200008]
gremlin> e = g.addEdge(v1, v2, 'edge label')
==>e[4c9f-Q1S-2F0LaTPQN8][200004-edge label->200008]

However, I now want to look up vertices based on multiple properties, which is where it gets tricky. In order to look up the right vertex, I'm making 2 calls to .has. It appears that the correct vertices are found, but adding the edge fails.

gremlin> v1 = g.V.has('x',5).has('y',7)
==>v[200004]
gremlin> v2 = g.V.has('x',3).has('y',5)
==>v[200008]
gremlin> e = g.addEdge(v1, v2, 'edge label')
No signature of method: groovy.lang.MissingMethodException.addEdge() is applicable for argument types: () values: []

What's the easiest way to add a simple edge between two existing vertices, based on a property value lookup?
The key issue is that .has returns a Pipe: in order to get the specific vertex instance, a simple call to .next() does the trick:

gremlin> v1 = g.V.has('x',5).has('y',7).next()
==>v[200004]
gremlin> v2 = g.V.has('x',3).has('y',5).next()
==>v[200008]
gremlin> e = g.addEdge(v1, v2, 'edge label')
==>e[4c9f-Q1S-2F0LaTPQN8][200004-edge label->200008]

Note that .next() will simply return the next item in the pipe. In this case, any additional vertices matching the property values are ignored.
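The same pipe-versus-element distinction shows up in any lazy-iterator API, which may make the error easier to see. A hedged Python analogy (plain dicts and a generator stand in for vertices and the pipe; this is not Gremlin itself):

```python
def matching(items, **props):
    # A lazy "pipe": nothing is evaluated until you pull from it.
    return (i for i in items
            if all(i.get(k) == v for k, v in props.items()))

vertices = [{"id": 200004, "x": 5, "y": 7},
            {"id": 200008, "x": 3, "y": 5}]

pipe = matching(vertices, x=5, y=7)
# `pipe` is a generator, not a vertex -- calling vertex methods on it
# fails, just like calling addEdge on a Pipe does in the question.
v1 = next(pipe)  # .next() pulls the first concrete match out of the pipe
```

Pulling with next() also discards any further matches, mirroring the caveat in the last sentence above.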
Gremlin
17,117,683
13
I'm using the OrientDB graph type. I need the Gremlin syntax for a search like the SQL LIKE operator:

LIKE 'search%' or LIKE '%search%'

I've checked has and filter (in http://gremlindocs.com/). However, those require that the exact property value is passed. I think this is incorrect with the logic of search. Thanks for anything.
For Cosmos DB Gremlin support:

g.V().has('foo', TextP.containing('search'))

You can find the documentation in the Microsoft Gremlin Support docs and the TinkerPop Reference.
Gremlin
19,085,078
12
I want to query my Titan 0.4 graph based on two filter conditions with the "OR" logical operator (return vertices if either of the conditions is true). I searched for this on http://sql2gremlin.com/, but only the "AND" operator is given. My requirement is something as given below:

SELECT * FROM Products
WHERE Discontinued = 1 OR UnitsInStock = 0

g.V('type','product').has('discontinued', true) "OR" .has('unitsInStock', 0)

Please help
You could do:

g.V('type','product').or(_().has('discontinued', true), _().has('unitsInStock', 0))

See the or step in GremlinDocs.
Gremlin
30,043,747
12
In the Modern graph, I want to get for each person the name and the list of names of software he created. So I tried the following query:

g.V().hasLabel('person').project('personName','softwareNames').
  by(values('name')).
  by(out('created').values('name').aggregate('a').select('a'))

but I get the error

The provided traverser does not map to a value: v[2]->[VertexStep(OUT,[created],vertex), PropertiesStep([name],value), AggregateStep(a), SelectOneStep(last,a)]

The problem seems to be that vertex 2 has no "created" edges. The query works if I run it only on vertices with at least one "created" edge, e.g. for vertex 4 ("V(4)" instead of "V()") the result is

==>[personName:josh,softwareNames:[ripple,lop]]

How can I get an empty list of software names for vertex 2, instead of the error?
You can simplify your Gremlin to this:

gremlin> g.V().hasLabel('person').
......1>   project('personName','softwareNames').
......2>     by('name').
......3>     by(out('created').values('name').fold())
==>[personName:marko,softwareNames:[lop]]
==>[personName:vadas,softwareNames:[]]
==>[personName:josh,softwareNames:[ripple,lop]]
==>[personName:peter,softwareNames:[lop]]

The by() modulator only does a next() on the inner traversal passed to it, so you need to reduce the results yourself - in this case fold() does that and handles the situation where you have an empty result.
Gremlin
49,466,453
12
I want to remove the edge between two vertices, so my code in Java TinkerPop3 is as below:

private void removeEdgeOfTwoVertices(Vertex fromV, Vertex toV, String edgeLabel, GraphTraversalSource g){
    if(g.V(toV).inE(edgeLabel).bothV().hasId(fromV.id()).hasNext()){
        List<Edge> edgeList = g.V(toV).inE(edgeLabel).toList();
        for (Edge edge : edgeList){
            if(edge.outVertex().id().equals(fromV.id())) {
                TitanGraph().tx();
                edge.remove();
                TitanGraph().tx().commit();
                return; // Edge removed, now return.
            }
        }
    }
}

Is there a simpler way to remove the edge between two vertices, with a direct query to that edge? Thanks for your help.
Here's an example of how to drop edges between two vertices (where you just have the ids of those vertices):

gremlin> graph = TinkerFactory.createModern()
==>tinkergraph[vertices:6 edges:6]
gremlin> g = graph.traversal()
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.V(1).bothE()
==>e[9][1-created->3]
==>e[7][1-knows->2]
==>e[8][1-knows->4]

For purpose of the example, let's say we want to drop edges between vertex 1 and vertex 2. We could find those with:

gremlin> g.V(1).bothE().where(otherV().hasId(2))
==>e[7][1-knows->2]

and then remove it with:

gremlin> g.V(1).bothE().where(otherV().hasId(2)).drop()
gremlin> g.V(1).bothE()
==>e[9][1-created->3]
==>e[8][1-knows->4]

If you have the actual vertices, then you could just do:

gremlin> g.V(v1).bothE().where(otherV().is(v2)).drop()
gremlin> g.V(1).bothE()
==>e[9][1-created->3]
==>e[8][1-knows->4]

You could re-write your function as:

private void removeEdgeOfTwoVertices(Vertex fromV, Vertex toV, String edgeLabel, GraphTraversalSource g){
    g.V(fromV).bothE().hasLabel(edgeLabel).where(__.otherV().is(toV)).drop().iterate();
    g.tx().commit();
}
Gremlin
34,589,215
11
What is the easiest and most efficient way to count the number of nodes/edges in a large graph via Gremlin? The best I have found is using the V iterator:

gremlin> g.V.gather{it.size()}

However, this is not a viable option for large graphs, per the documentation for V:

"The vertex iterator for the graph. Utilize this to iterate through all the vertices in the graph. Use with care on large graphs unless used in combination with a key index lookup."
I think the preferred way to do a count of all vertices would be:

gremlin> g = TinkerGraphFactory.createTinkerGraph()
==>tinkergraph[vertices:6 edges:6]
gremlin> g.V.count()
==>6
gremlin> g.E.count()
==>6

though, I think that on a very large graph g.V/E just breaks down no matter what you do. On a very large graph the best option for doing a count is to use a tool like Faunus (http://thinkaurelius.github.io/faunus/) so that you can leverage the power of Hadoop to do the counts in parallel.

UPDATE: The original answer above was for TinkerPop 2.x. For TinkerPop 3.x the answer is largely the same and implies use of Gremlin Spark or some provider-specific tooling (like DSE GraphFrames for DataStax Graph) that is optimized to do those kinds of large-scale traversals.
Gremlin
17,214,959
11
When a user enters data in a text box, many possibilities of SQL injection arise. To prevent this, many methods are available to put placeholders in the SQL query, which are replaced in the next step of the code by the input. Similarly, how can we prevent Gremlin injection in C#?

Example: the following is sample code for adding a node in a graph database. The values of the variables name and nodeId are taken from the user via a text box.

StringBuilder sb = new StringBuilder();
sb.Append("g.addV('" + name + "').property('id','" + nodeId + "')");

/* The following simply executes the gremlin query stored in sb */
IDocumentQuery<dynamic> query = client.CreateGremlinQuery<dynamic>(graph, sb.ToString());
while (query.HasMoreResults){
    foreach (dynamic result in await query.ExecuteNextAsync())
    {
        Console.WriteLine($"\t {JsonConvert.SerializeObject(result)}");
    }
}

A malicious user may enter the attribute values like:

name: person (without quotes)
id: mary');g.V().drop();g.addV('person').property('id', 'thomas (without quotes)

This will clear all the existing nodes and add only one node with the id: thomas. How do I prevent this from happening? I don't wish to blacklist characters like ";" or ")" as these are permissible as input for some data.

Note: Gremlin is a traversal language used in graph databases:
https://tinkerpop.apache.org/gremlin.html
https://learn.microsoft.com/en-us/azure/cosmos-db/gremlin-support
The question was originally about Gremlin injections for cases where the Gremlin traversal was sent to the server (e.g., Gremlin Server) in the form of a query script. My original answer for this scenario can be found below (Gremlin Scripts). However, by now Gremlin Language Variants are the dominant way to execute Gremlin traversals, which is why I extended my answer for them, because the situation is very different than for the case of simple Gremlin scripts.

Gremlin Language Variants

Gremlin Language Variants (GLVs) are implementations of Gremlin within different host languages like Python, JavaScript, or C#. This means that instead of sending the traversal as a string to the server like

client.SubmitAsync<object>("g.V().count");

it can simply be represented as code in the specific language and then executed with a special terminal step (like next() or iterate()):

g.V().Count().Next();

This builds and executes the traversal in C# (it would look basically the same in other languages, just not with the step names in Pascal case). The traversal will be converted into Gremlin Bytecode, which is the language-independent representation of a Gremlin traversal. This Bytecode will then be serialized to GraphSON to be sent to a server for evaluation:

{
  "@type" : "g:Bytecode",
  "@value" : {
    "step" : [ [ "V" ], [ "count" ] ]
  }
}

This very simple traversal already shows that GraphSON includes type information, especially since version 2.0 and more so in version 3.0, which is the default version since TinkerPop 3.3.0.
There are two interesting GraphSON types for an attacker, namely the already shown Bytecode, which can be used to execute Gremlin traversals like g.V().drop to manipulate / remove data from the graph, and g:Lambda, which can be used to execute arbitrary code1:

{
  "@type" : "g:Lambda",
  "@value" : {
    "script" : "{ it.get() }",
    "language" : "gremlin-groovy",
    "arguments" : 1
  }
}

However, an attacker would need to add either his own Bytecode or a lambda as an argument to a step that is part of an existing traversal. Since a string would simply be serialized as a string in GraphSON, no matter whether it contains something that represents a lambda or Bytecode, it is not possible to inject code into a Gremlin traversal with a GLV this way. The code would simply be treated as a string. The only way this could work is when the attacker would be able to provide a Bytecode or Lambda object directly to the step, but I can't think of any scenario that would allow for this.

So, to my best knowledge, injecting code into a Gremlin traversal is not possible when a GLV is used. This is independent of whether bindings are used or not.

The following part was the original answer for scenarios where the traversal is sent as a query string to the server:

Gremlin Scripts

Your example will indeed result in something you could call a Gremlin injection. I tested it with Gremlin.Net, but it should work the same way with any Gremlin driver.
Here is the test that demonstrates that the injection actually works:

var gremlinServer = new GremlinServer("localhost");
using (var gremlinClient = new GremlinClient(gremlinServer))
{
    var name = "person";
    var nodeId = "mary').next();g.V().drop().iterate();g.V().has('id', 'thomas";
    var query = "g.addV('" + name + "').property('id','" + nodeId + "')";
    await gremlinClient.SubmitAsync<object>(query);

    var count = await gremlinClient.SubmitWithSingleResultAsync<long>(
        "g.V().count().next()");
    Assert.NotEqual(0, count);
}

This test fails because count is 0, which shows that the Gremlin Server executed the g.V().drop().iterate() traversal.

Script parameterization

Now the official TinkerPop documentation recommends to use script parameterization instead of simply including the parameters directly in the query script like we did in the previous example. While it motivates this recommendation with performance improvements, it also helps to prevent injections by malicious user input.

To understand the effect of script parameterization here, we have to take a look at how a request is sent to the Gremlin Server (taken from the Provider Documentation):

{ "requestId":"1d6d02bd-8e56-421d-9438-3bd6d0079ff1",
  "op":"eval",
  "processor":"",
  "args":{"gremlin":"g.traversal().V(x).out()",
          "bindings":{"x":1},
          "language":"gremlin-groovy"}}

As we can see in this JSON representation of a request message, the arguments of a Gremlin script are sent separately from the script itself, as bindings. (The argument is named x here and has the value 1.) The important thing here is that the Gremlin Server will only execute the script from the gremlin element and then include the parameters from the bindings element as raw values.
A simple test to see that using bindings prevents the injection:

var gremlinServer = new GremlinServer("localhost");
using (var gremlinClient = new GremlinClient(gremlinServer))
{
    var name = "person";
    var nodeId = "mary').next();g.V().drop().iterate();g.V().has('id', 'thomas";
    var query = "g.addV('" + name + "').property('id', nodeId)";
    var arguments = new Dictionary<string, object>
    {
        {"nodeId", nodeId}
    };
    await gremlinClient.SubmitAsync<object>(query, arguments);

    var count = await gremlinClient.SubmitWithSingleResultAsync<long>(
        "g.V().count().next()");
    Assert.NotEqual(0, count);

    var existQuery = $"g.V().has('{name}', 'id', nodeId).values('id');";
    var nodeIdInDb = await gremlinClient.SubmitWithSingleResultAsync<string>(existQuery, arguments);
    Assert.Equal(nodeId, nodeIdInDb);
}

This test passes, which not only shows that g.V().drop() was not executed (otherwise count would again have the value 0), but it also demonstrates in the last three lines that the injected Gremlin script was simply used as the value of the id property.

1 This arbitrary code execution is actually provider specific. Some providers like Amazon Neptune for example don't support lambdas at all, and it is also possible to restrict the code that can be executed with a SandboxExtension for the Gremlin Server, e.g., by blacklisting known problematic methods with the SimpleSandboxExtension or by whitelisting only known unproblematic methods with the FileSandboxExtension.
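The core difference can be illustrated without any Gremlin server at all: string concatenation splices the payload into the executable script, while parameterization keeps it in a separate data field that is never evaluated. A minimal Python sketch of the two request shapes (the payload and field names mirror the examples above; no real server is involved, and the request dict is only an approximation of the wire format):

```python
import json

payload = "mary');g.V().drop();g.addV('person').property('id', 'thomas"

# Unsafe: the user input becomes part of the script the server evaluates.
concatenated = "g.addV('person').property('id','" + payload + "')"

# Safe: the script only references a binding name; the input travels
# alongside it as plain data in the "bindings" field.
request = {
    "op": "eval",
    "args": {
        "gremlin": "g.addV('person').property('id', nodeId)",
        "bindings": {"nodeId": payload},
        "language": "gremlin-groovy",
    },
}
wire = json.dumps(request)  # serializing cannot move data into the script
```

Whatever the payload contains, the "gremlin" script in the parameterized request never changes.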
Gremlin
44,473,303
11
I've been looking at the management system, but some things still elude me. Essentially what I want to do is:

List all edge-based indexes (including vertex-centric).
List all vertex-based indexes (on a per-label basis if the index is attached to a label).

It's basically like mapping out the graph schema. I've tried a few things but I only get partial data at best:

g.getIndexedKeys(<Vertex or Edge>); // basic information. Doesn't seem to return any buildEdgeIndex() based indexes

mgmt.getVertexLabels(); // gets labels, can't find a way of getting indexes attached to these labels.

mgmt.getGraphIndexes(Vertex.class); // works nicely. I can retrieve vertex indexes and get pretty much any information I want out of them except for information regarding indexOnly(label). So I can't tell what label these indexes are attached to.

mgmt.getGraphIndexes(Edge.class); // doesn't seem to return any buildEdgeIndex() indexes.

Any help filling the void would be appreciated. I'd like to know:

How can I find the indexes attached to a label (or the label attached to an index) via indexOnly()?
How do I list the edge indexes set via buildEdgeIndex() and their respective edge labels?

Thanks in advance. Extra information: Titan 0.5.0, Cassandra backend, via Rexster.
it's not really straight forward, hence I'm going to show it using an example. Let's start with the Graph of the Gods plus an additional index for god names:

g = TitanFactory.open("conf/titan-cassandra-es.properties")
GraphOfTheGodsFactory.load(g)
m = g.getManagementSystem()
name = m.getPropertyKey("name")
god = m.getVertexLabel("god")
m.buildIndex("god-name", Vertex.class).addKey(name).unique().indexOnly(god).buildCompositeIndex()
m.commit()

Now let's pull the index information out again.

gremlin> m = g.getManagementSystem()
==>com.thinkaurelius.titan.graphdb.database.management.ManagementSystem@2f414e82
gremlin> // get the index by its name
gremlin> index = m.getGraphIndex("god-name")
==>com.thinkaurelius.titan.graphdb.database.management.TitanGraphIndexWrapper@e4f5395
gremlin> // determine which properties are covered by this index
gremlin> index.getFieldKeys()
==>name
gremlin> //
gremlin> // the following part shows what you're looking for
gremlin> //
gremlin> import static com.thinkaurelius.titan.graphdb.types.TypeDefinitionCategory.*
gremlin> // get the schema vertex for the index
gremlin> sv = m.getSchemaVertex(index)
==>god-name
gremlin> // get index constraints
gremlin> rel = sv.getRelated(INDEX_SCHEMA_CONSTRAINT, Direction.OUT)
==>com.thinkaurelius.titan.graphdb.types.SchemaSource$Entry@5162bf3
gremlin> // get the first constraint; no need to do a .hasNext() check in this
gremlin> // example, since we know that we will only get a single entry
gremlin> sse = rel.iterator().next()
==>com.thinkaurelius.titan.graphdb.types.SchemaSource$Entry@5162bf3
gremlin> // finally get the schema type (that's the vertex label that's used in .indexOnly())
gremlin> sse.getSchemaType()
==>god

Cheers,
Daniel
Gremlin
25,702,461
11
I have the following graph:

g.addV('TEST').property(id, 't1')
g.addV('TEST').property(id, 't2').property('a', 1)

If I do:

g.V('t2').project('a').by(values('a'))

the traversal works and returns a map with key a because the property is there. But if I have a project step in my traversal like the following:

g.V('t1').project('a').by(values('a'))

because a is missing, it returns an error. Is there any way to return a null or empty value in such a case from the by() step to avoid this error?
You can use coalesce():

gremlin> g.V().project('a').by(coalesce(values('a'),constant('default')))
==>[a:default]
==>[a:1]
Gremlin
56,669,894
10
I'm using Cosmos graph DB in Azure. Does anyone know if there is a way to add an edge between two vertices only if it doesn't exist (using a Gremlin graph query)? I can do that when adding a vertex, but not with edges. I took the code to do so from here:

g.Inject(0).coalesce(__.V().has('id', 'idOne'), addV('User').property('id', 'idOne'))

Thanks!
It is possible to do with edges. The pattern is conceptually the same as for vertices and centers around coalesce(). Using the "modern" TinkerPop toy graph to demonstrate:

gremlin> g.V().has('person','name','vadas').as('v').
           V().has('software','name','ripple').
           coalesce(__.inE('created').where(outV().as('v')),
                    addE('created').from('v').property('weight',0.5))
==>e[13][2-created->5]

Here we add an edge between "vadas" and "ripple" but only if it doesn't exist already. The key here is the check in the first argument to coalesce().

UPDATE: As of TinkerPop 3.6.0, the fold()/coalesce()/unfold() pattern has been largely replaced by the new steps mergeV() and mergeE(), which greatly simplify the Gremlin required to do an upsert-like operation. Under 3.6.0 and newer versions, you would write:

gremlin> g.V().has('person','name','vadas').as('v2').
......1>   V().has('software','name','ripple').as('v5').
......2>   mergeE([(from):outV, (to): inV, label: 'created']).
......3>     option(onCreate, [weight: 0.5]).
......4>     option(outV, select('v2')).
......5>     option(inV, select('v5'))
==>e[13][2-edge->5]

You could also do this with id values if you know them, which makes it even easier:

gremlin> g.mergeE([(from): 2, (to): 5, label: 'created']).
......1>   option(onCreate, [weight: 0.5])
==>e[13][2-edge->5]
Gremlin
52,447,308
10
The goal

I have a simple enough task to accomplish: set the weight of a specific edge property. Take this scenario as an example: what I would like to do is update the value of weight.

Additional requirements

If the edge does not exist, it should be created.
There may only exist at most one edge of the same type between the two nodes (i.e., there can't be multiple "votes_for" edges of type "eat" between Joey and Pizza).
The task should be solved using the Java API of Titan (which includes Gremlin as part of TinkerPop 3).

What I know

I have the following information:

The Vertex labeled "user" (Joey)
The edge label votes_for
The value of the edge property type (in this case, "eat")
The value of the property name of the vertex labeled "meal" (in this case "pizza"), and hence also its Vertex.

What I thought of

I figured I would need to do something like the following:

1. Start at the Joey vertex.
2. Find all outgoing edges (which should be at most 1) labeled votes_for having type "eat" and an outgoing vertex labeled "meal" having name "pizza".
3. Update the weight value of the edge.

This is what I've messed around with in code:

// vertex is Joey in this case
g.V(vertex.id())
    .outE("votes_for")
    .has("type", "eat")
    // ... how do I filter by .outV so that I can check for "pizza"?
    .property(Cardinality.single, "weight", 0.99);
    // ... what do I do when the edge doesn't exist?

As commented in the code, there are still issues. Would explicitly specifying a Titan schema help? Are there any helper/utility methods I don't know of? Would it make more sense to have several vote_for labels instead of one label + type property, like vote_for_eat?

Thanks for any help!
You are on the right track. Check out the vertex steps documentation.

Label the edge, then traverse from the edge to the vertex to check, then jump back to the edge to update the property:

g.V(vertex.id()).
  outE("votes_for").has("type", "eat").as("e").
  inV().has("name", "pizza").
  select("e").property("weight", 0.99d).
  iterate()

Full Gremlin console session:

gremlin> Titan.version()
==>1.0.0
gremlin> Gremlin.version()
==>3.0.1-incubating
gremlin> graph = TitanFactory.open('inmemory'); g = graph.traversal()
==>graphtraversalsource[standardtitangraph[inmemory:[127.0.0.1]], standard]
gremlin> vertex = graph.addVertex(T.label, 'user', 'given_name', 'Joey', 'family_name', 'Tribbiani')
==>v[4200]
gremlin> pizza = graph.addVertex(T.label, 'meal', 'name', 'pizza')
==>v[4104]
gremlin> votes = vertex.addEdge('votes_for', pizza, 'type', 'eat', 'weight', 0.8d)
==>e[1zh-38o-4r9-360][4200-votes_for->4104]
gremlin> g.E(votes).valueMap(true)
==>[label:votes_for, weight:0.8, id:2rx-38o-4r9-360, type:eat]
gremlin> g.V(vertex.id()).outE('votes_for').has('type','eat').as('e').inV().has('name','pizza').select('e').property('weight', 0.99d).iterate(); g.E(votes).valueMap(true)
==>[label:votes_for, weight:0.99, id:2rx-38o-4r9-360, type:eat]

Would explicitly specifying a Titan schema help?

If you wanted to start from the Joey node without having a reference to the vertex or its id, this would be a good use case for a Titan composite index. The traversal would start with:

g.V().has("given_name", "Joey")

Are there any helper/utility methods I don't know of?

In addition to the TinkerPop reference documentation, there are several tutorials that you can read through:

Getting Started
The Gremlin Console
Recipes

Would it make more sense to have several vote_for labels instead of one label + type property, like vote_for_eat?

Depends on what your graph model or query patterns are, but more granular labels like vote_for_eat can work out fine. You can pass multiple edge labels on the traversal step:

g.V(vertex.id()).outE('vote_for_eat', 'vote_for_play', 'vote_for_sleep')

Update

There may only exist at most one edge of the same type between the two nodes

You can use the Titan schema to help with this, specifically define an edge label with multiplicity ONE2ONE. An exception will be thrown if you create more than one votes_for_eat edge between Joey and pizza.
Gremlin
40,746,954
10
I have a tree with many levels, where leaf nodes might have property "count". I want to calculate total count for each sub-tree, and cache those values in the root node of each sub-tree. Is that possible in Gremlin?
You could do it with a sideEffect - that's pretty straightforward. We set up a simple tree with:

gremlin> g = new TinkerGraph()
==>tinkergraph[vertices:0 edges:0]
gremlin> v1 = g.addVertex()
==>v[0]
gremlin> v2 = g.addVertex()
==>v[1]
gremlin> v3 = g.addVertex([count:2])
==>v[2]
gremlin> v4 = g.addVertex([count:3])
==>v[3]
gremlin> v1.addEdge('child',v2)
==>e[4][0-child->1]
gremlin> v1.addEdge('child',v3)
==>e[5][0-child->2]
gremlin> v2.addEdge('child',v4)
==>e[6][1-child->3]

And then here's the calculation over each subtree within the full tree:

gremlin> g.V().filter{it.outE().hasNext()}.sideEffect{
gremlin>   c=0;
gremlin>   it.as('a').out().sideEffect{leaf -> c+=(leaf.getProperty('count')?:0)}.loop('a'){true}.iterate()
gremlin>   it.setProperty('total',c)
gremlin> }
==>v[0]
==>v[1]
gremlin> g.v(0).total
==>5
gremlin> g.v(1).total
==>3

That query breaks down like this. First, this piece:

g.V().filter{it.outE().hasNext()}

gets any portion of the tree that is not a leaf node (i.e. a node should have at least one outgoing edge to not be a leaf). Second, we use sideEffect to process each root of a subtree:

it.as('a').out().sideEffect{leaf -> c+=(leaf.getProperty('count')?:0)}.loop('a'){true}.iterate()

storing the sum of the "count" property for each subtree in a variable called c. There's a bit of Groovy goodness there with the Elvis operator (?:) to check for vertices without a "count" property and return a zero in those cases. After you traverse the tree to calculate c, you can just store the value of c in the root node of the subtree via:

it.setProperty('total',c)
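The aggregation itself is easy to check outside the database. A small Python sketch of the subtree totals for the four-vertex tree built above (the dict-based tree is an illustrative stand-in for the graph, not a Gremlin API; missing counts default to 0, like the Elvis operator does):

```python
# Tree from the example: 0 -> 1, 0 -> 2 (count=2), 1 -> 3 (count=3)
children = {0: [1, 2], 1: [3], 2: [], 3: []}
counts = {2: 2, 3: 3}

def subtree_total(node):
    # Sum `count` over every descendant of `node`, defaulting to 0.
    return sum(counts.get(c, 0) + subtree_total(c) for c in children[node])

# Only non-leaf nodes get a cached total, matching the filter step above.
totals = {n: subtree_total(n) for n in children if children[n]}
```

This reproduces the console output above: vertex 0 totals 5 and vertex 1 totals 3.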
Gremlin
32,660,891
10
We're currently having some trouble with the Bitbucket Branch Source plugin used to handle a multibranch test job in one of our Jenkins instances (productive instance): any job related to a deleted branch is not getting deleted in Jenkins. It is shown as disabled. Checking the Scan Multibranch Pipeline Log I find the following entries:

Will not remove foobranch because it is new
Will not remove PR-1 because it is new
Will not remove bar because it is new
Will not remove freeDiskSpaceHack because it is new

We have another instance (test instance) where everything is working as expected - branches get removed immediately, e.g. seeing the following in the log:

Will remove freeDiskSpaceHack
Will remove foo

For both instances we're using the same Jenkins version (2.212.2) and plugin versions. The jobs in both instances use the same settings for the Bitbucket Branch Source plugin.

There's one difference: both jobs use a different repository in Bitbucket; the one of our test instance (where jobs get deleted) is a fork of the other one. Besides that there's no difference.

My questions are:

Why doesn't it work for our productive instance? Is there some secret setting?
What does the log want to tell me saying: Will not remove <branch> because it is new?

Hopefully anyone has a clue.
Finally I found the hidden switch by myself - feeling a little bit stupid, though. In the job configuration you can specify how long to keep old items. When setting up this job initially, I must have mixed up this setting with the setting which tells Jenkins how long to keep old builds, so it was set to 30 days. Btw.: the number of builds kept for the individual branches is not affected by this setting.

Orphaned Item Strategy (how it looked)
Orphaned Item Strategy (how it should have looked)

However, to immediately get rid of orphaned branches, you must not enter a number there - leave the field empty.
Jenkins
51,734,259
39
I want to use scp/ssh to upload some files to a server. I discover that I need to use certificate-based authentication, but the question is how? Really what I want to do is to use the same sort of credentials I use with git - passworded ssh cert stored in Jenkins. However, I can't work out how to - the snippet generator has no obvious option for that. What do others do? Is there an undocumented feature that would do this?
withCredentials([sshUserPrivateKey(credentialsId: "yourkeyid", keyFileVariable: 'keyfile')]) {
    stage('scp-f/b') {
        sh 'scp -i ${keyfile} do sth here'
    }
}

Maybe this is what you want. Install the Credentials Plugin and the Credentials Binding Plugin. Add some credentials and then you will get "yourkeyid"; bind these credentials to keyFileVariable, passphraseVariable etc. More details and documentation can be found on the Github site of the Jenkins Credentials Binding Plugin, Credentials Binding Plugin docs, Credentials Plugin, SSH Pipeline Steps plugin
Jenkins
44,377,238
39
I want to know if there is a function or pipeline plugin that allows creating a directory under the workspace instead of using sh "mkdir directory". I've tried to use the groovy instruction new File("directory").mkdirs() but it always returns an exception:

org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use new java.lang.RuntimeException java.lang.String
What you can do is use the dir step; if the directory doesn't exist, then the dir step will create the folders needed once you write a file or similar:

node {
    sh 'ls -l'
    dir ('foo') {
        writeFile file:'dummy', text:''
    }
    sh 'ls -l'
}

The sh steps are just there to show that the folder is created. The downside is that you will have a dummy file in the folder (the dummy write is not necessary if you're going to write other files). If I run this I get the following output:

Started by user jon
[Pipeline] node
Running on master in /var/lib/jenkins/workspace/pl
[Pipeline] {
[Pipeline] sh
[pl] Running shell script
+ ls -l
total 0
[Pipeline] dir
Running in /var/lib/jenkins/workspace/pl/foo
[Pipeline] {
[Pipeline] writeFile
[Pipeline] }
[Pipeline] // dir
[Pipeline] sh
[pl] Running shell script
+ ls -l
total 4
drwxr-xr-x 2 jenkins jenkins 4096 Mar 7 22:06 foo
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Jenkins
42,654,875
39
I am running Jenkins and Docker on a CentOS machine. I have a Jenkins job that pulls a Github repo and builds a Docker image. When I try running the job I get the error:

+ docker build -t myProject .
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Build step 'Execute shell' marked build as failure
Finished: FAILURE

This problem occurs even though I have added jenkins to my docker usergroup via sudo usermod -aG docker jenkins and restarted my machine. How do I fix this? By the way, if I try changing the command to sudo docker build -t myProject . I just get the error sudo: sorry, you must have a tty to run sudo
After the installation of Jenkins and Docker, add the jenkins user to the docker group (like you did):

sudo gpasswd -a jenkins docker

Edit the following file:

vi /usr/lib/systemd/system/docker.service

And edit this rule to expose the API:

ExecStart=/usr/bin/docker daemon -H unix:// -H tcp://localhost:2375

Do not create a new line with ExecStart; simply add the options at the end of the existing line. Now it's time to reload and restart your Docker daemon:

systemctl daemon-reload
systemctl restart docker

Then restart Jenkins, and you should be able to perform docker commands as the jenkins user in your Jenkins jobs:

sudo service jenkins restart
Jenkins
38,105,308
39
Is there any method or plugins available to retrieve deleted Jenkins job? I have mistakenly deleted one job from Jenkins. So please give a suggestion to undo the delete.
In fact you just have to use this URL to reactivate your deleted job:

http://[url_of_jenkins]/jobConfigHistory/?filter=deleted

You just need to have this plugin installed beforehand: https://wiki.jenkins.io/display/JENKINS/JobConfigHistory+Plugin
Jenkins
31,806,108
39
Recently my jenkins.log has started getting very large, very quickly, full of exceptions about DNS resolution. I attempted to use logrotate, but the log file grows too quickly even to be rotated, and just eats up all my disk space, which then causes various services to fail because they cannot write files anymore. How do I avoid that?
You can disable the logging of these DNS errors by adjusting the logging settings within Jenkins. From the Jenkins web interface go to: Manage Jenkins -> System Log -> Log Levels (on the left) Add the following entry: Name: javax.jmdns Level: off This way you can keep the Multicast DNS feature but without all the logging data.
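If you prefer to script this rather than click through the UI, the same logger can be switched off from Manage Jenkins -> Script Console using plain java.util.logging - a sketch (note that a level set this way does not survive a Jenkins restart):

```groovy
// Script Console sketch: silence the javax.jmdns logger at runtime.
import java.util.logging.Level
import java.util.logging.Logger

Logger.getLogger('javax.jmdns').level = Level.OFF
```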
Jenkins
31,719,756
39
I want to set up Sonar with Jenkins. But I'm not sure if the Sonar site describes two different ways to do this or if there are two necessary steps: As far as I understood it, it's two different ways. If this is the case, what is the difference and what are the advantages and disadvantages (between the Sonar itself and Sonar runner)?
If you want to analyse a project with SonarQube and Jenkins, here's what you need:

A SonarQube server up and running
A Jenkins server up and running with the SonarQube Scanner for Jenkins installed and configured to point to your SonarQube server
A job configured to run a SonarQube analysis on your project:
- Using the default and standard SonarQube Scanner (suitable for most projects)
- Using the SonarQube Scanner for MSBuild (for .NET solutions)
- Using a post build action for Maven-based projects

Everything is described in more detail on the SonarQube Scanner for Jenkins documentation page.
Jenkins
13,472,283
39
I've been pulling my hair out trying to find a way to include the list of changes generated by Jenkins (from the SVN pull) into our Testflight notes. I'm using the Testflight Plugin, which has a field for notes - but there doesn't seem to be any paramater/token that jenkins creates to embed that information. Has anyone had any luck accomplishing something like this?
It looks like the TestFlight Plugin expands variables placed into the "Build Notes" field, so the question is: how can we get the changes for the current build into an environment variable? As far as I'm aware, the Subversion plugin doesn't provide this information via an environment variable. However, all Jenkins SCM plugins integrate changelog information, as you can see via the "Changes" link in the web UI for each build. This information is also available via the Jenkins API, even while a build is in progress. For example, if you add an "Execute shell" build step where you run this command:

curl -s "http://jenkins/job/my-job/$BUILD_NUMBER/api/xml?wrapper=changes&xpath=//changeSet//comment"

You'll get an XML document similar to this:

<changes>
<comment>First commit.</comment>
<comment>Second commit.</comment>
</changes>

You can then format this information however you like and place it into an environment variable which can then be referenced in the TestFlight "Build Notes" section. However, setting an environment variable in a build step isn't persistent by default - to do so requires using the EnvInject Plugin. In this case, you could write your changelog text to a temporary file with contents like:

CHANGELOG="New in this build:\n- First commit.\n- Second commit."

Then, by using a build step with the Environment Properties File Path option to load this file, the $CHANGELOG variable would exist in your environment and persist until the end of the build, allowing you to use it in the "Build Notes" field. Note: I haven't used the TestFlight plugin myself (though I took a quick look at the code), and I only tested the changes API with a Git repository. Similarly, I didn't test how newlines should be encoded with the EnvInject plugin, so that could cause issues.
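As a sketch of that formatting step (the job URL and file name are placeholders, and it assumes the XML shape shown above), a system Groovy build step could produce the EnvInject properties file like this - untested:

```groovy
// Sketch: fetch the changes XML and write a CHANGELOG property for EnvInject.
// URL and output path are placeholders, not real values.
def url = 'http://jenkins/job/my-job/1/api/xml?wrapper=changes&xpath=//changeSet//comment'
def changes = new XmlSlurper().parseText(new URL(url).text)

// '\\n' in a single-quoted Groovy string is a literal backslash-n,
// which is the encoding the properties file above uses
def notes = changes.comment.collect { '- ' + it.text() }.join('\\n')
new File('changelog.properties').text = "CHANGELOG=\"New in this build:\\n${notes}\"\n"
```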
Jenkins
11,823,826
39
I've seen a number of posts on making a Maven-backed Jenkins build fail for a given project if a coverage threshold isn't met i.e. coverage must be at least 80% or the build fails. I'm wondering if there is a way to configure Jenkins to fail a build if the coverage is lower than the last build i.e. if the coverage for build N is 20%, and N+1 is 19%, then the build fails. I don't want to put in an explicit threshold, but I want the coverage to stay steady or get higher over time.
I haven't tried it, but assuming you are using maven cobertura plugin, I believe it can be configured to fail as documented here. Would jenkins not honour the failure? I also see an open feature request for this.
Jenkins
6,486,054
39
Is there a plugin which would allow me to create a "trend" graph for a hudson build which shows the build time for that project? I'm tasked with speeding up the build and I'd like to show a nice trend as I speed it up.
This is supported out of the box: http://SERVER/hudson/job/JOBNAME/buildTimeTrend
Jenkins
1,772,633
39
I am trying to understand the difference between the two options “Wipe out repository and force clone” and “Clean before checkout” for pulling a git repo. Looking at the help section for both options, both seem to have similar functionality and I can't make out the difference. Here's how they look: Wipe out repository & force clone: Delete the contents of the workspace before building, ensuring a fully fresh workspace. Clean before checkout Clean up the workspace before every checkout by deleting all untracked files and directories, including those which are specified in .gitignore. It also resets all tracked files to their versioned state. This ensures that the workspace is in the same state as if you cloned and checked out in a brand-new empty directory, and ensures that your build is not affected by the files generated by the previous build. I couldn't find any comparison between the two options; neither in Jenkins/GitPlugin wiki, nor in stack overflow, and not even in google. We currently have both options, but we are planning to reduce build time by removing the “Wipe out repository and force clone” option. But I don't want to break any functionality while doing this. Please explain the difference if you're sure. Thanks in advance :)
Wipe out repository & force clone will clean the entire project workspace and clone the project once again before building. It could be time consuming depending on the project size: if the project is 1GB, it downloads 1GB every time you build it. Clean before checkout removes the files created as part of the build - say your test results etc. - resets the files if they were updated, and pulls the latest changes if there are any. This ensures that the workspace is in the same state as if you cloned and checked out in a brand-new empty directory. It downloads only the delta, which could be a few MBs, so it is less time consuming. So you can go ahead and use Clean before checkout without affecting the build. I have been using this option for more than 4 years without any issues.
Jenkins
42,305,565
38
I have an option (a plugin?) called "Poll SCM" in Jenkins and I know what it does and how to use it. Please tell me what "SCM" stands for. Is it "Source Code Management", "Sync Code [something]"? Thanks.
SCM = Source Control Management. From the Jenkins tutorial, section "Create your Pipeline project in Jenkins": Step 6: From the Definition field, choose the Pipeline script from SCM option. This option instructs Jenkins to obtain your Pipeline from Source Control Management (SCM), which will be your locally cloned Git repository.
Jenkins
41,758,231
38
I have just installed Jenkins on my RHEL 6.0 server via rpm:

rpm -ivh jenkins-2.7.2-1.1.noarch.rpm

I have also configured my port to be 9917 to avoid clashing with my Tomcat server, allowing me to access the Jenkins page at ipaddress:9917. After entering the initial admin password at the Unlock Jenkins page, I am presented with a blank page with the header "SetupWizard [Jenkins]". Does anyone know why I am getting a blank page, and how do I solve it?
I experienced the same problem, but a simple restart fixed it. Just try this:

sudo service jenkins stop
sudo service jenkins start
Jenkins
38,966,105
38
I'm trying to use DSL pipelines in Jenkins. I thought it'd be nice if I could use the project name as part of my script.

git credentialsId: 'ffffffff-ffff-ffff-ffff-ffffffffffffff',
    url: "${repo_root}/${JOB_NAME}.git"

I get the error:

groovy.lang.MissingPropertyException:
No such property: JOB_NAME for class: groovy.lang.Binding

I thought I followed these directions, and they mention JOB_NAME as one of the variables. I decided to try:

sh 'env'

in my DSL, and this prints out:

JOB_NAME = foo-bar

which is what I expect. Another blog mentions:

Usage of environment variables
We have two ways to get their value. The properties passed by -D= during the startup we could read as System.getProperty("key") thanks to Groovy's strong relation with Java. Reading normal environment variables in the Java way is System.getenv("VARIABLE")...

Let's try this:

println "JOB_NAME = " + System.getenv('JOB_NAME');

Now, I get:

java.lang.NullPointerException: Cannot get property 'System' on null object

Null object? But I can see that JOB_NAME is an environment variable! How do I read the $JOB_NAME into a DSL script in a Pipeline job? I am trying a Pipeline job, and when I get that working I will make this a Multibranch Pipeline with a Jenkinsfile.
All environment variables are accessible using env, e.g. ${env.JOB_NAME}.
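So the git step from the question can reference it directly - a sketch, where repo_root and the credentials id are placeholders carried over from the question:

```groovy
// Jenkinsfile sketch: env.JOB_NAME resolves to the job's name, e.g. 'foo-bar'
node {
    echo "JOB_NAME = ${env.JOB_NAME}"
    git credentialsId: 'ffffffff-ffff-ffff-ffff-ffffffffffffff',
        url: "${repo_root}/${env.JOB_NAME}.git"
}
```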
Jenkins
38,599,641
38
I'm confused about the Jenkins Content Security Policy. I know these sites: Configuring Content Security Policy, Content Security Policy Reference. I have an HTML page shown via the Jenkins Clover Plugin. This HTML page uses inline style, e.g.:

<div class='greenbar' style='width:58px'>

The div element visualizes a progress bar. Using the default Jenkins CSP configuration leads to the following result: Progressbar_FAIL. The result I want to have looks like this: Progressbar_WORKS. I tried to relax the CSP rules, adding different combinations of parameters (script-src, style-src) with different levels (self, unsafe-inline, ...) but nothing works. So my questions for now:

Where do I have to specify the CSP configuration?
Is it possible to use inline styles? Where should the styles be located? My CSS stylesheets are located locally on the Jenkins server.
What is the best way to get inline style and CSP rules "satisfied"?

Update

1. Try -Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self'" in the jenkins.xml file. Then the following error occurs: Refused to apply inline style because it violates the following Content Security Policy directive: "default-src 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-'), or a nonce ('nonce-...') is required to enable inline execution. Note also that 'style-src' was not explicitly set, so 'default-src' is used as a fallback.

2. Try -Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self'; style-src 'self'" in the jenkins.xml file. Then the following error occurs: Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-'), or a nonce ('nonce-...') is required to enable inline execution. I understand that this try cannot solve my problem, because default-src includes style-src.

3. Try -Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self'; style-src 'unsafe-inline'" in the jenkins.xml file.
Then the following error occurs: Refused to load the stylesheet https://jenkins/andsomedir/stylesheet.css because it violates the following Content Security Policy directive: "style-src 'unsafe-inline'".
While experimenting, I recommend using the Script Console to adjust the CSP parameter dynamically as described on the Configuring Content Security Policy page. (There's another note in the Jenkins wiki page that indicates you may need to Force Reload the page to see the new settings.) In order to use both inline styles and local stylesheets, you need to add both self and unsafe-inline: System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "default-src 'self'; style-src 'self' 'unsafe-inline';") Depending on how the progressbar is manipulated, you may need to adjust 'script-src' in the same way as well. Once you find a setting that works, you can adjust the Jenkins startup script to add the CSP parameter definition.
Jenkins
37,618,892
38
I want to run a Jenkins instance in a docker container. I want Jenkins itself to be able to spin up docker containers as slaves to run tests in. It seems the best way to do this is to use

docker run -v /var/run/docker.sock:/var/run/docker.sock -p 8080:8080 -ti my-jenkins-image

(source) The Dockerfile I'm using is:

FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
USER root
RUN apt-get update && apt-get install -y docker.io
RUN usermod -aG docker jenkins
USER jenkins

If I start a bash session in my running container and run docker info on my image I get:

$ docker info
FATA[0000] Get http:///var/run/docker.sock/v1.18/info: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

And if I run the bash session as root:

docker exec -u 0 -ti cocky_mccarthy bash
root@5dbd0efad2b0:/# docker info
Containers: 42
Images: 50
...

So I guess the docker group I'm adding the Jenkins user to is the group for the internal docker, hence the socket is not readable without sudo. That's kind of a problem as the Jenkins docker plugin etc. are not set up to use sudo. How can I mount the socket so it can be used from the image without sudo?
A bit late, but this might help other users who are struggling with the same problem. The problem here is that the docker group on your docker host has a different group id from the id of the docker group inside your container. Since the daemon only cares about the id and not about the name of the group, your solution will only work if these ids match by accident.

One way to solve this is by using tcp instead of a unix socket, via the -H option when starting Docker engine. You should be very careful with this, as it allows anyone who has access to this port to gain root access to your system.

A more secure way of fixing this is making sure that the docker group inside the container ends up having the same group id as the docker group outside of the container. You can do this using build arguments for your docker build.

Dockerfile:

FROM jenkinsci
ARG DOCKER_GROUP_ID
USER root
RUN curl -o /root/docker.tgz https://get.docker.com/builds/Linux/x86_64/docker-1.12.5.tgz && tar -C /root -xvf /root/docker.tgz && mv /root/docker/docker /usr/local/bin/docker && rm -rf /root/docker*
RUN curl -L https://github.com/docker/compose/releases/download/1.7.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
RUN groupadd -g $DOCKER_GROUP_ID docker && gpasswd -a jenkins docker
USER jenkins

And then building it using:

docker build \
  --build-arg DOCKER_GROUP_ID=`getent group docker | cut -d: -f3` \
  -t my-jenkins-image .

After this you can run your image and have docker access as non-root:

docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 \
  -ti my-jenkins-image

Because this solution depends on supplying the correct group id to the docker daemon when the image is being built, this image would need to be built on the machine(s) where it is being used. If you build the image, push it and someone else pulls it on their machine, chances are that the group id's won't match again.
Jenkins
36,185,035
38
I seem to be having issues with integrating Xcode 6 with Jenkins. I currently have this setup and working with Xcode 5. With Xcode 6, running remotely via SSH the simulator times out; when I run locally it succeeds.

Command:

xcodebuild -workspace PROJECTNAME.xcworkspace -scheme BGO_Tests -destination 'platform=iOS Simulator,name=iPhone 5s' -derivedDataPath ./Build clean test

2014-08-19 10:46:36.591 xcodebuild[33966:381f] iPhoneSimulator: Timed out waiting 120 seconds for simulator to boot, current state is 1.
Testing failed: Test target BGO_Tests encountered an error (Timed out waiting 120 seconds for simulator to boot, current state is 1)

Tested with recent Xcode 6 beta 6
Note: the device names changed in Xcode 7, so you no longer specify them using iPhone 5 (9.1 Simulator) but rather iPhone 5 (9.1). Use xcrun instruments -s to get the current list of devices and then you can pre-launch it using:

xcrun instruments -w "iPhone 5 (9.1)" || echo "(Pre)Launched the simulator."

Prelaunching

I got to a point where what I proposed down there wasn't working anymore. In addition to making the changes mentioned here, you need to launch the simulator xcodebuild is expecting BEFORE xcodebuild is run:

# First get the UDID you need
xcrun instruments -s
# Then launch it
open -a "iOS Simulator" --args -CurrentDeviceUDID <sim device UDID>
# and wait some time....
sleep 5
# Then launch your unit tests
xcodebuild [...] -destination 'platform=iOS Simulator,name=<device name matching the UDID>'

Old post

This bug is fixed in Xcode 6.3 and above. If you are experiencing similar problems in newer Xcode, it's likely another bug. Apple follow-up regarding Bug ID# 18001199:

The context provided by LaunchDaemons is not supported for running GUI applications. The SSH service, and the default setup for Jenkins, are both implemented as LaunchDaemons. In earlier versions of Xcode 5 xcodebuild could run tests on the iOS simulator in this context, but that was never a supported configuration, and as you have noted that is no longer working as of Xcode 6. Unlike LaunchDaemons, LaunchAgents provide a context where you can run GUI applications - if the user is logged in at the time, with a window server / Aqua session. Converting your Jenkins configuration from being a LaunchDaemon to being a LaunchAgent would avoid the reported issue. You can also use launchd for running tests on the iOS simulator from a SSH session, either by crafting a LaunchAgent and manually loading / starting that, or by using "launchctl submit".

Ok, after some more digging around the comments around here (many thanks to Opal), I found out that launching the slave via JNLP instead works.
As many people mentioned, it is not currently possible to run the unit tests over SSH, so you might want to turn towards the JNLP agent for now until Apple fixes it. If connecting with JNLP still does not solve it, try the solution mentioned in this comment, i.e. run these on the command line:

DevToolsSecurity -enable
sudo dscl . -append /Groups/_developer GroupMembership "user-that-runs-the-sim"
security authorizationdb write system.privilege.taskport is-developer

See references here and here. I've recently found out that if you install a new version of Xcode and do not launch it, the simulator might start timing out again. To solve this, I've had to manually launch Xcode and install the additional tools it requested.
Jenkins
25,380,365
38
The website for the plugin says that you can create a groovy script to run to determine the parameter list. how is this resolved though? The instructions don't say anything. In what context is the script run? What am i supposed to return from the script? What directory is the cwd of the script? is it the environment variable WORKSPACE? there is an extra field called variable bindings. How is this used?
I had to dig into the source code to find the answer to these questions, so I hope this helps everyone else.

1. In what context is the script run?

The script is run inside a groovy.lang.GroovyShell. This class is currently from the Groovy 1.8.5 library. Here is an excerpt from the code:

// line 419 - 443 of the ExtendedChoiceParameterDefinition
else if(!StringUtils.isBlank(groovyScript)) {
    try {
        GroovyShell groovyShell = new GroovyShell();
        setBindings(groovyShell, bindings);
        Object groovyValue = groovyShell.evaluate(groovyScript);
        String processedGroovyValue = processGroovyValue(isDefault, groovyValue);
        return processedGroovyValue;
    }
    catch(Exception e) {
    }
}
else if(!StringUtils.isBlank(groovyScriptFile)) {
    try {
        GroovyShell groovyShell = new GroovyShell();
        setBindings(groovyShell, bindings);
        groovyScript = Util.loadFile(new File(groovyScriptFile));
        Object groovyValue = groovyShell.evaluate(groovyScript);
        String processedGroovyValue = processGroovyValue(isDefault, groovyValue);
        return processedGroovyValue;
    }
    catch(Exception e) {
    }
}

2. What am I supposed to return from the script?

As the above code demonstrates, the script should return a string with whatever delimiter you have specified in the parameter, or a String[] array. Here is a snippet of the function that processes the value returned from the script:

// line 450 - 465 of ExtendedChoiceParameterDefinition
private String processGroovyValue(boolean isDefault, Object groovyValue) {
    String value = null;
    if(groovyValue instanceof String[]) {
        String[] groovyValues = (String[])groovyValue;
        if(!isDefault) {
            value = StringUtils.join((String[])groovyValue, multiSelectDelimiter);
        }
        else if(groovyValues.length > 0) {
            value = groovyValues[0];
        }
    }
    else if(groovyValue instanceof String) {
        value = (String)groovyValue;
    }
    return value;
}

3. What directory is the cwd of the script? Is it the environment variable WORKSPACE? Does it matter?
You can access the environment variable WORKSPACE from within the script using:

Map<String, String> props = System.getenv();
def currentDir = props.get('WORKSPACE');

4. There is an extra field called variable bindings. How is this used?

This is a property-file-formatted key=value file. These names are then resolvable in the groovy script, e.g.:

key1=foo
prop2=bar
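Putting those answers together, a complete parameter script might look like this - a sketch only, with made-up choice values:

```groovy
// Groovy script for an Extended Choice Parameter (sketch).
def props = System.getenv()
def workspace = props.get('WORKSPACE')

// values could equally be derived from the workspace, a file, etc.
def choices = ['stable', 'beta', 'nightly']

// a name defined in the "variable bindings" field (e.g. key1=foo)
// would resolve directly as 'key1' here

// the last expression is the return value: a String[] or a String
// using the delimiter configured on the parameter
return choices as String[]
```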
Jenkins
24,730,186
38
I am new to Jenkins and I want to know how it is possible to display the HTML report (not the HTML code) generated after a successful build inside a mail body (not as an attachment). I want to know the exact steps I should follow and what should be the content of my possible jelly template.
Look deeper into the plugin documentation. No need for groovy here. Just make sure Content Type is set to HTML and add the following to the body:

${FILE,path="my.html"}

This will place the my.html content in your email body (the location of the file is relative to the job's workspace). I use it and it works well. I hope this helps. EDIT: Note that you must have Jenkins version 1.532.1 (or higher) to support this feature with the email-ext plugin.
Jenkins
22,066,936
38
I'am using Jenkins now, and sometimes the build jobs stucked and with red progress bar. I am really confused how jenkins determine the color of the progress bar. When it is blue? and when it become red? Does anyone have ideas?
The progress bar is normally empty, filling with blue while a build is in progress. The amount of time it takes the progress bar to fill is based on the estimated job duration. This estimate is generally based on the average duration of the last few successful builds. If there is no previous job data upon which to make a time estimation, the progress bar shows a stripy-blue animation. From the build progress bar definition (as of Jenkins 1.560), we can see that the property red becomes set when the build executor "is likely stuck" - i.e. it is taking significantly longer than the estimated time to complete. Looking at the progressBar tag implementation, setting the red property causes the table.progress-bar.red CSS property to be applied to the bar. In the Executor source code, we see that Jenkins defines "stuck" as the build taking ten times longer than the estimate. If there is no estimate, any build taking longer than 24 hours is considered stuck.
Jenkins
19,653,757
38
We have a lot of developers creating feature branches that I would like to build. Nightly we run a code quality tool that needs to run on every branch. I also would not like a static configuration because the number of branches changes every few weeks.
In the Git configuration there is a field 'Branch Specifier (blank for default)'. If you put ** there, it will build all branches from all remotes. Having that, you can use the environment variable ${GIT_BRANCH}, e.g. to set a title for the build using https://wiki.jenkins-ci.org/display/JENKINS/Build+Name+Setter+Plugin or for other purposes
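If you're on a pipeline job rather than freestyle, the same information can be picked up from the checkout step, which in reasonably recent Git plugin versions returns the Git environment variables as a map - a sketch:

```groovy
// Scripted pipeline sketch: use the branch name in the build display name.
node {
    def scmVars = checkout(scm)   // returns a map incl. GIT_BRANCH, GIT_COMMIT
    currentBuild.displayName = "#${env.BUILD_NUMBER} ${scmVars.GIT_BRANCH}"
}
```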
Jenkins
15,344,955
38
I am getting an error when inputting my repo location into the "Source Code Management > Git > Repository URL" section of a new job. I have searched all around and tried many different URLs with no success. Error:

Failed to connect to repository : Error performing command: git ls-remote -h https://github.com/micdoodle8/Crossbow_Mod_2.git HEAD

Any ideas? Thanks.
You might need to set the path to your git executable in Manage Jenkins -> Configure System -> Git -> Git Installations -> Path to Git executable. For example, I was getting the same error in Windows. I had installed git with chocolatey, and got the location via Powershell:

Get-Command git.exe | Select Definition

In Unix, you should be able to do:

which git
Jenkins
12,681,308
38
I have a .net application built on .net framework 3.5, and I am trying to build this application on a Jenkins CI server. I've added the MSBuild plugin and have added the path to the .exe file of the 2.0, 3.5 and 4.0 versions of MSBuild. But my build processes are failing with the below error message:

Path To MSBuild.exe: msbuild.exe
Executing command: cmd.exe /C msbuild.exe Neo.sln && exit %%ERRORLEVEL%%
[Test project] $ cmd.exe /C msbuild.exe Neo.sln && exit %%ERRORLEVEL%%
'msbuild.exe' is not recognized as an internal or external command,
operable program or batch file.
Build step 'Build a Visual Studio project or solution using MSBuild.' marked build as failure
Finished: FAILURE

Could anyone please help me out?
To make the MSBuild plugin work, you need to configure the plugin in the Jenkins management screen. NOTE: in the newer Jenkins versions you find the MSBuild configuration in the Global Tool Configuration: Note the "Name" field, where I've called this particular configuration v4.0.30319. You could call it anything you like, but ideally the name will somehow refer to the version. You'll need to refer to this name later in the Jenkins PROJECT that's failing. Note: The yellow warning implies that the Path to MSBuild field should be populated with a directory name rather than a file name. In practice you do need to enter the filename here too (ie. msbuild.exe) or the build step will fail. In the Jenkins project that's failing, go to the MSBuild build step. The first field in the build step is "MSBuild Version". If you created the build step before configuring any MSBuild versions, the value here will be (default). After configuring one or more MSBuild versions, the drop down will be populated with the available configurations. Select the one you require. You can see here that I've now selected the named configuration that matches the installation above.
Jenkins
10,227,967
38
Why are there two kinds of jobs for Jenkins, both the multi-configuration project and the free-style project project? I read somewhere that once you choose one of them, you can't convert to the other (easily). Why wouldn't I always pick the multi-configuration project in order to be safe for future changes? I would like to setup a build for a project building both on Windows and Unix (and other platforms as well). I found this question), which asks the same thing, but I don't really get the answer. Why would I need three matrix projects (and not three free-style projects), one for each platform? Why can't I keep them all in one matrix, with platforms AND (for example) gcc version on one axis and (my) software versions on the other? I also read this blog post, but that builds everything on the same machine, with just different Python versions. So, in short: how do most people configure a multi-configuration project targeting many different platforms?
The two types of jobs have separate functions:

Free-style jobs: these allow you to build your project on a single computer or label (group of computers, e.g. "Windows-XP-32").

Multi-configuration jobs: these allow you to build your project on multiple computers or labels, or a mix of the two, e.g. Windows-XP, Windows-Vista, Windows-7 and RedHat - useful for checking compatibility or building for multiple platforms (Qt programs?).

If you have a project which you want to build on Windows & Unix, you have two options:

Create a separate free-style job for each configuration, in which case you have to maintain each one individually.

Have one multi-configuration job, and select two (or more) labels/computers/slaves - one for Windows and one for Unix. In this case, you only have to maintain one job for the build.

You can keep your gcc versions on one axis and your software versions on another; there is no reason you should not be able to. The question that you link to has a fair point, but one that does not relate to your question directly: in his case, he had a multi-configuration job A which - on success - triggered another job B. Now, in a multi-configuration job, if one of the configurations fails, the entire job fails (obviously, since you want your project to build successfully on all your configurations). IMHO, for building the same project on multiple platforms, the better way to go is to use a multi-configuration style job.
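To get a feel for how matrix axes combine, here is a quick Python sketch (the axis values are made-up examples, not anything Jenkins reads) showing that a multi-configuration job builds the cross product of its axes:

```python
from itertools import product

# Hypothetical axes: one for the platform label, one for the gcc version.
platforms = ["Windows-XP", "Windows-7", "RedHat"]
gcc_versions = ["gcc-4.4", "gcc-4.7"]

# A multi-configuration job builds every combination of its axis values.
configurations = list(product(platforms, gcc_versions))

for platform, gcc in configurations:
    print(f"build on {platform} with {gcc}")

# 3 platforms x 2 compiler versions = 6 configurations
assert len(configurations) == 6
```

This is why one matrix job can replace several free-style jobs: each added axis multiplies, rather than duplicates, the configurations you maintain.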
Jenkins
7,515,730
38
My Jenkinsfile has several parameters. Every time I make an update to the parameters (e.g. remove or add a new input) and commit the change to my SCM, I do not see the job input screen updated accordingly in Jenkins. I have to run an execution and cancel it before I see my updated fields in:

properties([
    parameters([
        string(name: 'a', defaultValue: 'aa', description: '*'),
        string(name: 'b', description: '*'),
        string(name: 'c', description: '*')
    ])
])

Any clues?
One of the ugliest things I've done to get around this is create a Refresh parameter which basically exits the pipeline right away. This way I can run the pipeline just to update the properties.

pipeline {
    agent any
    parameters {
        booleanParam(name: 'Refresh', defaultValue: false, description: 'Read Jenkinsfile and exit.')
    }
    stages {
        stage('Read Jenkinsfile') {
            when {
                expression { return params.Refresh == true }
            }
            steps {
                echo("Ended pipeline early.")
            }
        }
        stage('Run Jenkinsfile') {
            when {
                expression { return params.Refresh == false }
            }
            stages {
                stage('Build') {
                    // steps
                }
                stage('Test') {
                    // steps
                }
                stage('Deploy') {
                    // steps
                }
            }
        }
    }
}

There really must be a better way, but I'm yet to find it :(
Jenkins
44,422,691
37
On my Jenkins I configured:

Source Code Management
Git repository: https://bitbucket.org/username/project.git
credentials: username/password

Build Triggers
Build when a change is pushed to BitBucket

On my BitBucket Webhooks: http://Jenkins.URL:8080/bitbucket-hook

I tried pushing a small change to a .txt file, but Jenkins doesn't build automatically. If I manually click "Build Now", it shows success. What could be the problem? In the Bitbucket repository, the project is simple. I just have a text file to test. I think as long as I make any change to the text file, it should trigger a Jenkins build.

Edit: In the System Log of Jenkins, it shows "Polling has not run yet.". But in the Bitbucket webhook request log, I can see all the requests.
You don't need to enable Poll SCM. You have to ensure that your webhook (Settings -> Webhooks) is pointing to your Jenkins bitbucket-hook like the following: "https://ci.yourorg.com/bitbucket-hook/". Notice the last "/": without it, the build will not be triggered. It's an annoying thing, as you will get a 200 status code from Jenkins when sending requests with or without it.
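As a sanity check for the trailing-slash pitfall, a small Python helper (the host is a placeholder; only the /bitbucket-hook/ path comes from the answer) that normalizes the webhook URL before you paste it into Bitbucket:

```python
def normalize_hook_url(url: str) -> str:
    """Ensure the bitbucket-hook URL ends with the trailing slash Jenkins expects."""
    return url if url.endswith("/") else url + "/"

# Placeholder host; substitute your own Jenkins address.
print(normalize_hook_url("http://jenkins.example.com:8080/bitbucket-hook"))
# http://jenkins.example.com:8080/bitbucket-hook/
```

A check like this is worth doing precisely because Jenkins answers 200 either way and gives you no other hint.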
Jenkins
31,202,359
37
I want to use the Jenkins Remote API, and I am looking for a safe solution. I came across Prevent Cross Site Request Forgery exploits and I want to use it, but I read somewhere that you have to make a crumb request. How do I get a crumb in order to get the API working? I found this https://github.com/entagen/jenkins-build-per-branch/pull/20, but still I don't know how to fix it. My Jenkins version is 1.50.x. Authenticated remote API request responds with 403 when using POST request.
I haven't found this in the documentation either. This code is tested against an older Jenkins (1.466), but should still work.

To issue the crumb use the crumbIssuer:

// left out: you need to authenticate with user & password -> sample below
HttpGet httpGet = new HttpGet(jenkinsUrl + "crumbIssuer/api/json");
String crumbResponse = toString(httpclient, httpGet);
CrumbJson crumbJson = new Gson().fromJson(crumbResponse, CrumbJson.class);

This will get you a response like this:

{"crumb":"fb171d526b9cc9e25afe80b356e12cb7","crumbRequestField":".crumb"}

This contains the two pieces of information you need: the field name with which you need to pass the crumb, and the crumb itself.

If you now want to fetch something from Jenkins, add the crumb as a header. In the sample below I fetch the latest build results.

HttpPost httpost = new HttpPost(jenkinsUrl + "rssLatest");
httpost.addHeader(crumbJson.crumbRequestField, crumbJson.crumb);

Here is the sample code as a whole. I am using gson 2.2.4 to parse the response and Apache's httpclient 4.2.3 for the rest.

import org.apache.http.auth.*;
import org.apache.http.client.*;
import org.apache.http.client.methods.*;
import org.apache.http.impl.client.*;

import com.google.gson.Gson;

public class JenkinsMonitor {

    public static void main(String[] args) throws Exception {
        String protocol = "http";
        String host = "your-jenkins-host.com";
        int port = 8080;
        String userName = "username";
        String password = "password";

        DefaultHttpClient httpclient = new DefaultHttpClient();
        httpclient.getCredentialsProvider().setCredentials(
                new AuthScope(host, port),
                new UsernamePasswordCredentials(userName, password));

        String jenkinsUrl = protocol + "://" + host + ":" + port + "/jenkins/";
        try {
            // get the crumb from Jenkins
            // do this only once per HTTP session
            // keep the crumb for every coming request
            System.out.println("... issue crumb");
            HttpGet httpGet = new HttpGet(jenkinsUrl + "crumbIssuer/api/json");
            String crumbResponse = toString(httpclient, httpGet);
            CrumbJson crumbJson = new Gson()
                    .fromJson(crumbResponse, CrumbJson.class);

            // add the issued crumb to each request header
            // the header field name is also contained in the json response
            System.out.println("... issue rss of latest builds");
            HttpPost httpost = new HttpPost(jenkinsUrl + "rssLatest");
            httpost.addHeader(crumbJson.crumbRequestField, crumbJson.crumb);
            toString(httpclient, httpost);
        } finally {
            httpclient.getConnectionManager().shutdown();
        }
    }

    // helper construct to deserialize crumb json into
    public static class CrumbJson {
        public String crumb;
        public String crumbRequestField;
    }

    private static String toString(DefaultHttpClient client,
            HttpRequestBase request) throws Exception {
        ResponseHandler<String> responseHandler = new BasicResponseHandler();
        String responseBody = client.execute(request, responseHandler);
        System.out.println(responseBody + "\n");
        return responseBody;
    }
}
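The same crumb handshake can be sketched in Python. The parsing step below runs against the sample response shown in the answer; the actual HTTP call is left in a function (not executed here) and would need your real Jenkins URL and credentials:

```python
import json
import urllib.request


def parse_crumb(response_text: str) -> tuple[str, str]:
    """Turn the crumbIssuer JSON into a (header-name, header-value) pair."""
    data = json.loads(response_text)
    return data["crumbRequestField"], data["crumb"]


def crumb_header(jenkins_url: str, user: str, password: str) -> dict:
    """Fetch the crumb and return it as a header dict (requires network access)."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, jenkins_url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(jenkins_url + "crumbIssuer/api/json") as resp:
        field, value = parse_crumb(resp.read().decode("utf-8"))
    return {field: value}


# Parsing demo with the sample response shown above:
sample = '{"crumb":"fb171d526b9cc9e25afe80b356e12cb7","crumbRequestField":".crumb"}'
field, value = parse_crumb(sample)
print(field, value)  # .crumb fb171d526b9cc9e25afe80b356e12cb7
```

As in the Java version, fetch the crumb once per session and attach the returned header to every subsequent POST.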
Jenkins
16,738,441
37
I can find out just about everything about my Jenkins server via the Remote API, but not the list of currently running jobs. These:

http://my-jenkins/computer/api/json

or

http://my-jenkins/computer/(master)/api/json

would seem like the most logical choices, but they say nothing (other than the count of jobs) about which jobs are actually running.
There is often confusion between jobs and builds in Jenkins, especially since jobs are often referred to as 'build jobs'. Jobs (or 'build jobs' or 'projects') contain configuration that describes what to run and how to run it. Builds are executions of a job. A build contains information about the start and end time, the status, logging, etc. See https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project for more information.

If you want the jobs that are currently building (i.e. have one or more running builds), the fastest way is to use the REST API with XPath to filter on colors that end with _anime, like this:

http://jenkins.example.com/api/xml?tree=jobs[name,url,color]&xpath=/hudson/job[ends-with(color/text(),%22_anime%22)]&wrapper=jobs

will give you something like:

<jobs>
    <job>
        <name>PRE_DB</name>
        <url>http://jenkins.example.com/job/my_first_job/</url>
        <color>blue_anime</color>
    </job>
    <job>
        <name>SDD_Seller_Dashboard</name>
        <url>http://jenkins.example.com/job/my_second_job/</url>
        <color>blue_anime</color>
    </job>
</jobs>

Jenkins uses the color field to indicate the status of the job, where the _anime suffix indicates that the job is currently building. Unfortunately, this won't give you any information on the actual running build. Multiple instances of the job may be running at the same time, and the running build is not always the last one started.

If you want to list all the running builds, you can also use the REST API to get a fast answer, like this:

http://jenkins.example.com/computer/api/xml?tree=computer[executors[currentExecutable[url]],oneOffExecutors[currentExecutable[url]]]&xpath=//url&wrapper=builds

will give you something like:

<builds>
    <url>http://jenkins.example.com/job/my_first_job/1412/</url>
    <url>http://jenkins.example.com/job/my_first_job/1414/</url>
    <url>http://jenkins.example.com/job/my_second_job/13126/</url>
</builds>

Here you see a list of all the currently running builds. You will need to parse the URL to separate the job name from the build number. Notice how my_first_job has two builds that are currently running.
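Splitting a build URL like the ones above into the job name and build number is straightforward; a small Python sketch (the URL is one of the sample values from the answer, not a live endpoint):

```python
from urllib.parse import urlparse


def split_build_url(url: str) -> tuple[str, int]:
    """Extract (job_name, build_number) from a Jenkins build URL."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    # Path looks like ['job', '<name>', '<number>'] for a simple (non-folder) job.
    job_index = parts.index("job")
    return parts[job_index + 1], int(parts[job_index + 2])


print(split_build_url("http://jenkins.example.com/job/my_first_job/1412/"))
# ('my_first_job', 1412)
```

Note that jobs inside folders repeat the /job/ segment (e.g. /job/folder/job/name/42/), so for folder setups you would need to walk all the segments rather than stopping at the first one.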
Jenkins
14,843,874
37
I have tried to rename a Hudson/Jenkins job, but it failed to rename. Is there any way I can rename the job?
You can rename the selected job through the Jenkins UI by following these steps: Job > Configure > Advanced Project Options > Display Name. Another way is to rename the job's directory on the Jenkins/Hudson server and then restart Jenkins.
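The directory rename amounts to moving the job's folder under JENKINS_HOME/jobs; a hedged Python sketch (the JENKINS_HOME path in the example is a placeholder, and this assumes Jenkins is stopped, or at least restarted afterwards, so it picks up the change):

```python
import os


def rename_job_dir(jenkins_home: str, old_name: str, new_name: str) -> str:
    """Rename a job's directory on disk; restart Jenkins afterwards."""
    old_path = os.path.join(jenkins_home, "jobs", old_name)
    new_path = os.path.join(jenkins_home, "jobs", new_name)
    os.rename(old_path, new_path)
    return new_path


# Example with a placeholder path:
# rename_job_dir("/var/lib/jenkins", "old-job", "new-job")
```

Anything that references the job by its old name (downstream triggers, scripts, bookmarked URLs) will need updating by hand after the rename.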
Jenkins
14,603,615
37
Does anyone know a good way to add soapUI tests to my CI builds?
soapUI offers test automation via Maven or Ant. Maven integration is described HERE. I tried it some months ago but had some strange issues with the eviware repository... Therefore I run my tests now via Ant. What you have to do is call the testrunner.bat (or testrunner.sh) script in the soapUI bin directory. You can find the available arguments HERE. You have to install soapUI on your Hudson build server. Then you simply create a new job which is built via Ant.

Sample build.xml:

<project name="IntegrationTest" default="soapui-tests" basedir=".">
    <description>Runs the soapUI integration tests</description>

    <property file="build.properties"/>

    <target name="checkos">
        <condition property="testrunner.cmd" value="${soapUI.home}/bin/testrunner.bat">
            <os family="windows" />
        </condition>
        <condition property="testrunner.cmd" value="${soapUI.home}/bin/testrunner.sh">
            <os family="unix" />
        </condition>
    </target>

    <target name="soapui-tests" depends="checkos">
        <exec executable="${testrunner.cmd}" failonerror="yes" failifexecutionfails="yes">
            <arg value="-e ${service.endpoint}"/>
            <arg value="-P dbUrl=${db.Url}"/>
            <arg value="-rajf"/>
            <arg path="${report.dir}"/>
            <arg path="${soapui.project.folder}"/>
        </exec>
    </target>
</project>
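The build.xml above pulls its settings from build.properties; a minimal sketch of that file (all values are placeholders you would adapt to your environment, keyed to the ${...} properties referenced in the build.xml):

```properties
soapUI.home=C:/Program Files/SmartBear/soapUI-4.5.1
service.endpoint=http://localhost:8080/myservice
db.Url=jdbc:mysql://localhost:3306/testdb
report.dir=reports
soapui.project.folder=soapui-projects/my-project-soapui.xml
```

Keeping these in a properties file lets the same build.xml run unchanged on developer machines and on the Hudson/Jenkins server.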
Jenkins
6,648,244
37