Dataset columns: question (string, 11 to 28.2k chars) · answer (string, 26 to 27.7k chars) · tag (130 classes) · question_id (int64, 935 to 78.4M) · score (int64, 10 to 5.49k)
Is it possible to make a reference to an issue in GitLab from a commit message? I haven't found anything on the web.
There should be a way to reference a GitLab issue from a commit message: simply use #xx (the id of the issue). See "Default closing pattern value" in the GitLab documentation. For example, the following commit message:

```
Awesome commit message

Fix #20, Fixes #21 and Closes group/otherproject#22.
This commit is also related to #17 and fixes #18, #19
and https://gitlab.example.com/group/otherproject/issues/23.
```

will close #18, #19, #20, and #21 in the project this commit is pushed to, as well as #22 and #23 in group/otherproject. #17 won't be closed, as it does not match the pattern. It works with multi-line commit messages as well as one-liners when used with git commit -m.

See also "Tutorial: It's all connected in GitLab". Add references in issue or merge request descriptions or in comments; this will update the issue with info about anything related.

- To reference an issue: #123
- To reference a MR: !123
- To reference a snippet: $123

pierreb adds in the comments:

> Although this is not specifically asked in the original question, it may be worth adding that one can also cross-reference a previous commit in a new commit message in GitLab. You do it by copying the hash from the commit message you want to reference and simply pasting it in the new commit message. Something along these lines: "This is related with commit 7as7b101". You can see (much) more in: "gitlab-ce issue 13345 'ability to reference a commit'" and "Tutorial: It's all connected in GitLab".

andrybak adds in the comments:

> "You do it by copying the hash from the commit message you want to reference and simply pasting it in the new commit message." There is also a special, more human-readable format:
>
> ```
> git log -1 --format=reference
> <abbrev-hash> (<title-line>, <short-author-date>)
> ```
>
> See git log PRETTY FORMATS. This format is used to refer to another commit in a commit message and is the same as --pretty='format:%C(auto)%h (%s, %ad)'. By default, the date is formatted with --date=short unless another --date option is explicitly specified. As with any format: with format placeholders, its output is not affected by other options like --decorate and --walk-reflogs.

Fr0zenFyr adds in the comments: similarly, even a closed issue can be referenced, like "Related to #78 and #93".
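For a concrete end-to-end run, here is a minimal sketch; the issue number, branch, and remote are hypothetical, while the closing keyword and the reference format are the documented GitLab/Git behaviour (the reference pretty format needs a newer Git, 2.25+):

```bash
# Commit with a closing pattern; pushing this to the project's default branch
# closes issue #20 on GitLab.
git commit -m "Fix #20: guard against empty config"
git push origin master

# Build a human-readable reference to an earlier commit for reuse in a
# new commit message:
git log -1 --format=reference HEAD~3
# prints something like: 1f2e3d4 (Fix #20: guard against empty config, 2024-05-01)
```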
GitLab
45,292,973
61
So I have a React/TypeScript app in my repo that I'm working on, and in my repo I have a .env file that I'm ignoring so that my secrets don't get exposed, and a .env-example file of important environment variables to configure. My problem is: since I'm not pushing the .env file to my repo, when I deploy my app through Google App Engine (this is done in the deployment stage in my gitlab-ci.yml file), these environment variables will not be present in production, and I need them for my app to work, as I do something like this in my webpack.config.js file:

```js
const dotenv = require('dotenv').config({ path: __dirname + '/.env' });
```

and then

```js
new webpack.DefinePlugin({
  'process.env': dotenv.parsed
})
```

Here is my .gitlab-ci file for reference in case anyone here wants to see:

```yaml
cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

Build_Site:
  image: node:8-alpine
  stage: build
  script:
    - npm install --progress=false
    - npm run-script build
  artifacts:
    expire_in: 1 week
    paths:
      - build

Run_Tests:
  image: node:8-alpine
  stage: test
  script:
    - npm install --progress=false
    - npm run-script test

Deploy_Production:
  image: google/cloud-sdk:latest
  stage: deploy
  environment: Production
  only:
    - master
  script:
    - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud config set project $PROJECT_ID_PRODUCTION
    - gcloud info
    - gcloud --quiet app deploy
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
```

Also, feel free to critique my gitlab-ci.yml file so I can make it better.
I don't know if you still need this, but this is how I achieved what you wanted:

1. Create your environment variables in your GitLab repo config.
2. Create setup_env.sh:

```bash
#!/bin/bash
echo API_URL=$API_URL >> .env
echo NODE_ENV=$NODE_ENV >> .env
```

3. Modify your .gitlab-ci.yml. Upsert the below into your before_script: section:

```yaml
- chmod +x ./setup_env.sh
- ./setup_env.sh
```

4. In webpack.config.js, make use of https://www.npmjs.com/package/dotenv:

```js
require('dotenv').config();
```

This makes your .env variables available in the webpack.config.js file.

5. Add this to your plugins array (add those variables you need):

```js
new webpack.DefinePlugin({
  'process.env.API_URL': JSON.stringify(process.env.API_URL),
  'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV),
  ...
})
```

Now your deployment should use the environment variables specified in your GitLab settings.
GitLab
52,540,316
61
I am trying to test whether my SSH key was added correctly by following the instructions found halfway down the page here:

> To test whether your SSH key was added correctly, run the following command in your terminal (replacing gitlab.com with your GitLab's instance domain):
>
> ```
> ssh -T git@gitlab.com
> ```

However, I have no idea what my "GitLab's instance domain" is referring to. I have searched online but I cannot find anything relevant.
If you're using GitLab on gitlab.com, then the domain is gitlab.com, so you should run:

```bash
ssh -T git@gitlab.com
```
GitLab
56,033,742
61
When I do this:

```bash
git clone https://example.com/root/test.git
```

I am getting this error:

```
fatal: HTTP request failed
```

When I use SSH:

```bash
git clone username git@example.com:root/test.git
```

I am getting this error:

```
Initialized empty Git repository in /server/user/git@example.com:root/test.git/.git/
fatal: 'user' does not appear to be a git repository
fatal: The remote end hung up unexpectedly
```

It's a private repository, and I have added my SSH keys.
It looks like there's no straightforward solution for HTTPS-based cloning with GitLab. Therefore, if you want SSH-based cloning, you should take into account these three steps:

1. Properly create an SSH key using the email you used to sign up. I would use the default filename for the key on Windows. Don't forget to introduce a password! (Tip: you can skip this step if you already have an SSH key here.)

```
$ ssh-keygen -t rsa -C "your_email@example.com" -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key ($PWD/.ssh/id_rsa): [\n]
Enter passphrase (empty for no passphrase): [your password]
Enter same passphrase again: [your password]
Your identification has been saved in $PWD/.ssh/id_rsa.
Your public key has been saved in $PWD/.ssh/id_rsa.pub.
```

2. Copy and paste all content from the recently generated id_rsa.pub into Settings > SSH keys > Key in your GitLab profile.

```bash
# Copy to clipboard
pbcopy < ~/.ssh/id_rsa.pub
```

3. Get locally connected:

```
$ ssh -i $PWD/.ssh/id_rsa git@gitlab.com
Enter passphrase for key "$PWD/.ssh/id_rsa": [your password]
PTY allocation request failed on channel 0
Welcome to GitLab, you!
Connection to gitlab.com closed.
```

Finally, clone any private or internal GitLab repository!

```
$ git clone https://git.metabarcoding.org/obitools/ROBIBarcodes.git
Cloning into 'ROBIBarcodes'...
remote: Counting objects: 69, done.
remote: Compressing objects: 100% (65/65), done.
remote: Total 69 (delta 14), reused 0 (delta 0)
Unpacking objects: 100% (69/69), done.
```
GitLab
30,202,642
60
I forked a GitHub repository, made some changes on my fork and submitted a pull request, but the owners of the original GitHub repository asked for some changes, which they asked me for in the pull request. I assumed that adding additional changes to my fork would cause them to show up in the current pull request, but to my surprise I can't see my changes in the pull request. This is what I did after creating the original pull request:

1. made code changes
2. added the files: git add -A
3. committed the files: git commit -m "these are my suggested changes in pull request"
4. submitted my changes with git push

I can see the changes on my own fork, but I don't understand why I can't see any changes in the pull request. Does anyone know what I need to do for my changes to show up in the current pull request? I'd really appreciate your help.
First, make sure that you pushed to the right branch and that the commit still doesn't show up in the PR. For me, the commit showed up when I viewed the specific branch in the repo, but didn't show in the PR and didn't trigger CI. This is what fixed the issue for me:

1. Click on Edit in the top right corner.
2. Click on the base selector and choose the same base branch. GitHub will ask you whether you want to change the base; confirm.
3. The PR should update and the commit should show up.
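For the first part (pushing to the right branch), a minimal sketch, assuming the PR was opened from a branch named my-feature on your fork (branch and remote names are hypothetical):

```bash
git checkout my-feature       # the branch the pull request was opened from
git add -A
git commit -m "Address review feedback"
git push origin my-feature    # the PR tracks this branch, so the new commit should appear in it
```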
GitLab
45,626,986
60
I am looking for a Git hosting environment for several users. I've searched for comparisons between Gitolite, GitLab and Gitorious, but found nothing useful. Is there anybody who has experience with different hosting tools and could provide advice?
Gitolite is not a git hosting environment: it is an authorization layer, which grants or denies access to a git repo. It is the https or ssh layer which provides the authentication of a user, and the git repo hosting itself. See "How do programs like gitolite work?"

GitLab and Gitorious both provide git hosting, with (in my opinion) an easier installation process and more features for GitLab. When comparing GitLab features and Gitorious features: the issue tracker is more complete, the fork and authorization levels are present, and GitLab has teams and groups, webhooks, and an API. The development is more active (one release every month) on GitLab.

Update: GitLab has acquired Gitorious.
GitLab
17,167,414
59
Situation

I have two servers, Production and Development. On the Production server, there are two applications and multiple (6) databases (MySQL) which I need to distribute to developers for testing. All source code is stored in GitLab on the Development server, and developers are working only with this server and don't have access to the production server. When we release an application, master logs into production and pulls the new version from Git. The databases are large (over 500M each and counting) and I need to distribute them as easily as possible to developers for testing.

Possible solutions

- After a backup script which dumps databases, each to a single file, execute a script which pushes each database to its own branch. A developer pulls one of these branches if he wants to update his local copy. This one was found non-working.
- Cron on the production server saves binary logs every day and pushes them into the branch of that database. So, in the branch, there are files with daily changes and the developer pulls the files he doesn't have. The current SQL dump will be sent to the developer another way. And when the size of the repository becomes too large, we will send a full dump to the developers, flush all data in the repository and start from the beginning.

Questions

- Is the solution possible?
- If git is pushing/pulling to/from the repository, does it upload/download whole files, or just changes in them (i.e. adds new lines or edits the current ones)?
- Can Git manage such large files? No.
- How to set how many revisions are preserved in a repository? Doesn't matter with the new solution.
- Is there any better solution? I don't want to force the developers to download such large files over FTP or anything similar.
Update 2017: Microsoft is contributing to Microsoft/GVFS: a Git Virtual File System which allows Git to handle "the largest repo on the planet" (i.e. the Windows code base, which is approximately 3.5M files and, when checked in to a Git repo, results in a repo of about 300GB, and produces 1,760 daily "lab builds" across 440 branches in addition to thousands of pull request validation builds).

> GVFS virtualizes the file system beneath your git repo so that git and all tools see what appears to be a normal repo, but GVFS only downloads objects as they are needed.

Some parts of GVFS might be contributed upstream (to Git itself). But in the meantime, all new Windows development is now (August 2017) on Git.

Update April 2015: GitHub proposes: Announcing Git Large File Storage (LFS). Using git-lfs (see git-lfs.github.com) and a server supporting it (lfs-test-server), you can store metadata only in the git repo, and the large file elsewhere. Maximum of 2 GB per commit. See git-lfs/wiki/Tutorial:

```bash
git lfs track '*.bin'
git add .gitattributes "*.bin"
git commit -m "Track .bin files"
```

Original answer:

Regarding what the git limitations with large files are, you can consider bup (presented in detail in GitMinutes #24). The design of bup highlights the three issues that limit a git repo:

- huge files (the xdelta for packfiles is in memory only, which isn't good with large files)
- huge numbers of files, which means one file per blob, and slow git gc to generate one packfile at a time
- huge packfiles, with a packfile index that is inefficient for retrieving data from the (huge) packfile

Handling huge files and xdelta

> The primary reason git can't handle huge files is that it runs them through xdelta, which generally means it tries to load the entire contents of a file into memory at once. If it didn't do this, it would have to store the entire contents of every single revision of every single file, even if you only changed a few bytes of that file. That would be a terribly inefficient use of disk space, and git is well known for its amazingly efficient repository format.
>
> Unfortunately, xdelta works great for small files and gets amazingly slow and memory-hungry for large files. For git's main purpose, i.e. managing your source code, this isn't a problem.
>
> What bup does instead of xdelta is what we call "hashsplitting." We wanted a general-purpose way to efficiently back up any large file that might change in small ways, without storing the entire file every time. We read through the file one byte at a time, calculating a rolling checksum of the last 128 bytes. rollsum seems to do pretty well at its job. You can find it in bupsplit.c. Basically, it converts the last 128 bytes read into a 32-bit integer. What we then do is take the lowest 13 bits of the rollsum, and if they're all 1's, we consider that to be the end of a chunk. This happens on average once every 2^13 = 8192 bytes, so the average chunk size is 8192 bytes. We're dividing up those files into chunks based on the rolling checksum. Then we store each chunk separately (indexed by its sha1sum) as a git blob.
>
> With hashsplitting, no matter how much data you add, modify, or remove in the middle of the file, all the chunks before and after the affected chunk are absolutely the same. All that matters to the hashsplitting algorithm is the 32-byte "separator" sequence, and a single change can only affect, at most, one separator sequence or the bytes between two separator sequences. Like magic, the hashsplit chunking algorithm will chunk your file the same way every time, even without knowing how it had chunked it previously.
>
> The next problem is less obvious: after you store your series of chunks as git blobs, how do you store their sequence? Each blob has a 20-byte sha1 identifier, which means the simple list of blobs is going to be 20/8192 = 0.25% of the file length. For a 200GB file, that's 488 megs of just sequence data.
>
> We extend the hashsplit algorithm a little further using what we call "fanout." Instead of checking just the last 13 bits of the checksum, we use additional checksum bits to produce additional splits. What you end up with is an actual tree of blobs - which git 'tree' objects are ideal to represent.

Handling huge numbers of files and git gc

> git is designed for handling reasonably-sized repositories that change relatively infrequently. You might think you change your source code "frequently" and that git handles much more frequent changes than, say, svn can handle. But that's not the same kind of "frequently" we're talking about.
>
> The #1 killer is the way it adds new objects to the repository: it creates one file per blob. Then you later run 'git gc' and combine those files into a single file (using highly efficient xdelta compression, and ignoring any files that are no longer relevant).
>
> 'git gc' is slow, but for source code repositories, the resulting super-efficient storage (and associated really fast access to the stored files) is worth it.
>
> bup doesn't do that. It just writes packfiles directly. Luckily, these packfiles are still git-formatted, so git can happily access them once they're written.

Handling huge repositories (meaning huge numbers of huge packfiles)

> Git isn't actually designed to handle super-huge repositories. Most git repositories are small enough that it's reasonable to merge them all into a single packfile, which 'git gc' usually does eventually.
>
> The problematic part of large packfiles isn't the packfiles themselves - git is designed to expect the total size of all packs to be larger than available memory, and once it can handle that, it can handle virtually any amount of data about equally efficiently. The problem is the packfile index (.idx) files. Each packfile (*.pack) in git has an associated idx (*.idx) that's a sorted list of git object hashes and file offsets. If you're looking for a particular object based on its sha1, you open the idx, binary search it to find the right hash, then take the associated file offset, seek to that offset in the packfile, and read the object contents. The performance of the binary search is about O(log n) with the number of hashes in the pack, with an optimized first step (you can read about it elsewhere) that somewhat improves it to O(log(n)-7).
>
> Unfortunately, this breaks down a bit when you have lots of packs. To improve performance of this sort of operation, bup introduces midx (pronounced "midix" and short for "multi-idx") files. As the name implies, they index multiple packs at a time.
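To see the pack/idx pairing the last section talks about in one of your own repos, something like this works (the exact pack file names will differ per repo):

```bash
# List the packfiles and their indexes side by side:
ls .git/objects/pack/

# Show every object in a pack with its offset, as recorded in the .idx:
git verify-pack -v .git/objects/pack/pack-*.idx | head
```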
GitLab
17,888,604
59
I build the project on GitLab CI with

```bash
./gradlew assembleDebug --stacktrace
```

and sometimes it throws an error:

```
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:transformClassesWithDexBuilderForDebug'.
> com.android.build.api.transform.TransformException: java.lang.IllegalStateException: Dex archives: setting .DEX extension only for .CLASS files
```

On my local PC it works correctly. The Kotlin version is 1.2 and multidex is enabled. What is the reason for this error?
./gradlew clean fixed the same error for me.
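In the context of the question's CI job, that would mean cleaning before the build, e.g.:

```bash
./gradlew clean
./gradlew assembleDebug --stacktrace
```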
GitLab
47,791,227
59
Currently, when I start a build in GitLab CI, it runs under the gitlab-runner user. I want to change it to the company's internal user. I didn't find any parameter for /etc/gitlab-runner/config.toml which solves that. My current configuration:

```toml
concurrent = 1

[[runners]]
  name = "deploy"
  url = ""
  token = ""
  executor = "shell"
```
Running ps aux | grep gitlab you can see:

```
/usr/bin/gitlab-ci-multi-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
```

The service is running with the option --user. So let's change this; how depends on which distro you are running. If systemd, there is a file /etc/systemd/system/gitlab-runner.service:

```
[Service]
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/bin/gitlab-ci-multi-runner "run" "--working-directory" "/home/gitlab-runner" "--config" "/etc/gitlab-runner/config.toml" "--se
```

Bingo, let's change this now:

```bash
gitlab-runner uninstall
gitlab-runner install --working-directory /home/ubuntu --user ubuntu
```

Reboot the machine or reload the service (i.e. systemctl daemon-reload), et voilà!
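A quick sketch of that reload-and-verify step (service and user names as in the answer):

```bash
sudo systemctl daemon-reload          # pick up the rewritten unit file
sudo systemctl restart gitlab-runner
ps aux | grep gitlab-runner           # the --user flag should now show the new user
```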
GitLab
37,187,899
58
I created a merge request on gitlab (local) server. Now whenever I click on the merge request, the request times out with error 500. Before that I used to get an error code 504 and I applied the change mentioned in this gitlab support topic. All I want to do is remove the merge request. Is there a manual way of doing this?
Web UI Option

Today I discovered a way to do this with the Web UI. For Merge Request 14, go to

```
https://gitlab.example.com/MyGroup/MyProject/merge_requests/14/edit
```

On the bottom right you should see a red Delete button.

PowerShell Option

```powershell
Invoke-RestMethod -Method Delete -Uri 'https://gitlab.example.com/api/v4/projects/PROJECT_ID_GOES_HERE/merge_requests/14' -Headers @{'PRIVATE-TOKEN'='PRIVATE_TOKEN_GOES_HERE'}
```
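The same API call with curl, using the documented DELETE /projects/:id/merge_requests/:iid endpoint (ID and token placeholders as above):

```bash
curl --request DELETE \
     --header "PRIVATE-TOKEN: PRIVATE_TOKEN_GOES_HERE" \
     "https://gitlab.example.com/api/v4/projects/PROJECT_ID_GOES_HERE/merge_requests/14"
```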
GitLab
32,194,657
57
I have a GitLab CI runner running on Windows 10:

```yaml
before_script:
  - "echo off"
  - 'call "%VS120COMNTOOLS%\vsvars32.bat"'
  - echo.
  - set
  - echo.

stages:
  - build

build:
  stage: build
  script:
    - 'StatusTest.exe'
    #- msbuild...
```

I am trying to fail the build with StatusTest.exe (I tried returning status codes -1, 0, 1; throwing an exception, etc.), but the runner only logs the exception and continues with the following steps. What determines that the CI shell runner should fail the build and not proceed to the next step?

Output:

```
...
windows_tracing_logfile=C:\BVTBin\Tests\installpackage\csilogfile.log
$ echo.

$ StatusTest.exe

Unhandled Exception: System.Exception: tralala
   at StatusTest.Program.Main(String[] args)
$ echo "Restoring NuGet Packages..."
...
```
> What determines that CI shell runner should fail the build and not proceed to next step?

If a pipeline job exits with a code other than 0, then that job fails, causing all the following jobs in the pipeline to be skipped. This behaviour can be changed on a per-job basis with the allow_failure job keyword.

To make a job fail forcefully you need to artificially exit from the job with a code other than 0. Here is a .gitlab-ci.yml job example:

```yaml
some-example-job:
  script:
    - # ....
    - exit 1
```

Conversely, remove the exit 1 and the job will succeed, provided the remaining script section commands do not exit with a code other than 0.
GitLab
36,619,212
57
I am working on a project which I cloned from a remote repository hosted on GitLab. I made some changes to the project, but I didn’t create any branch and want to now start work on some other new features, but without first pushing my existing work to the remote repository. I might discard the changes in the new feature or might need to push both the new feature as well as the earlier changes to the remote repository, at a later stage. From what I know about Git, I think I need to create a new local branch, which I can do using git checkout -b NEW_BRANCH_NAME. Is this the correct way to accomplish what I am trying to do? When I use this command, it creates a new branch. How do I switch back and forth between working on this new branch and the earlier one?
You switch back and forth between branches using git checkout <branch name>. And yes, git checkout -b NEW_BRANCH_NAME is the correct way to create a new branch and switch to it. At the same time, that command is shorthand for git branch <branch name> followed by git checkout <branch name>.
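A quick sketch of the round trip (branch names are hypothetical; git switch needs Git 2.23+):

```bash
git checkout -b new-feature   # create the branch and switch to it
git checkout master           # back to the earlier branch
git checkout new-feature      # and forth again

# On Git 2.23+, the same with the more explicit commands:
git switch -c another-feature
git switch master
```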
GitLab
66,882,952
57
Not long ago, we made the switch from SVN to Git. A few days ago, I realized that all of our team gets these messages when they push:

```
$ git push
Counting objects: 32, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (19/19), done.
Writing objects: 100% (32/32), 2.94 KiB | 0 bytes/s, done.
Total 32 (delta 14), reused 0 (delta 0)
error: The last gc run reported the following. Please correct the root cause and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

To git@gitlab.example.com:root/xxx.git
   15c3bbb..69e6d8b  xxxx -> xxx
```

I thought it was coming from my computer for a while, until I realized that everybody has the same issue. Needless to say, there is no gc.log in my .git folder, and using 'git gc' or 'git prune' has no effect. So my question is: could it be that the repository hosted on the server is somehow not clean? If so, how do I actually clean it? All of the solutions I have found so far relate to local copies of repositories. Also, we use GitLab to host our repos.

EDIT: It is worth saying that since I posted this question I have also tried "Housekeeping" the repository using GitLab, but with no result so far.
This is tracked in issue 14357 (GitLab 8.6 or less). The manual fix was:

1. SSH into worker1
2. cd into the gitlab-org/gitlab-ce directory
3. run rm gc.log; this file just contained the line "warning: There are too many unreachable loose objects; run 'git prune' to remove them."
4. run git prune and pray it doesn't break things (which it thankfully didn't)

But it looks like, starting with GitLab 8.7, auto gc is disabled. This is also done in the context of (still open) issue 13524:

> Typically after a rebase, amend or other action that requires a force push we can have dangled commits. Such "dereferenced" commits are getting lost due to git gc that may be executed internally or by using GitLab Housekeeping features. If it happens that there was a discussion attached to a specific commit - it is not available after the dereferenced commit has been garbage-collected. Commits are being recorded in push events and are available through system notes added to merge requests, and currently this produces error 500 in GitLab.

Update: that issue was closed a month later (July 2016) with:

- MR 5062: Don't garbage collect commits that have related DB records like comments. Makes sure a commit is kept around when Git garbage collection runs. Git GC will delete commits from the repository that are no longer in any branches or tags, but we want to keep some of these commits around, for example if they have comments or CI builds.
- MR 4101: Refactor: Convert existing array-based diff refs to the DiffRefs model
GitLab
37,732,141
56
I am getting a syntax error when I test my gitlab-ci.yml in CI Lint. Can someone suggest a solution to this problem?

```yaml
build-production:
  stage: build
  only:
    - master
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
```

Status: syntax is incorrect

```
jobs:build-production config key may not be used with `rules`: only
```
The documentation is pretty clear:

> rules replaces only/except and they can't be used together in the same job. If you configure one job to use both keywords, the linter returns a key may not be used with rules error.

I suggest using rules: for both of your conditions:

```yaml
rules:
  - if: '$CI_COMMIT_REF_NAME == "master" && $CI_COMMIT_TAG'
```
GitLab
67,010,580
56
I have a Dockerfile which builds FROM a private registry's image. I build this file without any problem with Docker version 1.12.6, build 78d1802 and docker-compose version 1.8.0, build unknown, but on another machine which has Docker version 17.06.1-ce, build 874a737 and docker-compose version 1.16.1, build 6d1ac21, docker-compose build returns:

```
FROM my.private.gitlab.registry:port/image:tag
http://my.private.gitlab.registry:port/v2/docker/image/manifests/tag: denied: access forbidden
```

docker pull my.private.gitlab.registry:port/image:tag returns the same. Notice that I tried to get my.private.registry:port/image:tag, and http://my.private.registry:port/v2/docker/image/manifests/tag is what was actually requested.
If this is an authenticated registry, then you need to run docker login <registryurl> on the machine where you are building this. This only needs to be done once per host. The command then caches the auth in a file:

```
$ cat ~/.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "......="
    }
  }
}
```
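Applied to the registry from the question, that would look something like the following (the username/token depend on your GitLab setup; a deploy token works too):

```bash
docker login my.private.gitlab.registry:port
docker pull my.private.gitlab.registry:port/image:tag
```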
GitLab
46,418,652
54
I have a Unity CI project. .gitlab-ci.yml contains a base .build job with one script command. I also have multiple specified jobs to build each platform, which extend the base .build. I want to execute some platform-specific commands for Android, so I have created a separate job generate-android-apk. But if it fails, the pipeline fails too (I know about allow_failure). Is it possible to extend the script section between jobs without copy-pasting?
UPDATE: since GitLab 13.9 it is possible to use !reference tags from other jobs or "templates" (which are commented-out jobs, using a dot as prefix):

```yaml
actual_job:
  script:
    - echo doing something

.template_job:
  after_script:
    - echo done with something

job_using_references_from_other_jobs:
  script:
    - !reference [actual_job, script]
  after_script:
    - !reference [.template_job, after_script]
```

Thanks to @amine-zaine for the update.

FIRST APPROACH: You can achieve modular script sections by utilizing 'literal blocks' (using |) like so:

```yaml
.template1: &template1 |
  echo install

.template2: &template2 |
  echo bundle

testJob:
  script:
    - *template1
    - *template2
```

See Source.

ANOTHER SOLUTION: Since GitLab 11.3 it is possible to use extends, which could also work for you:

```yaml
.template:
  script: echo test template
  stage: testStage
  only:
    refs:
      - branches

rspec:
  extends: .template
  after_script:
    - echo test job
  only:
    variables:
      - $TestVar
```

See Docs. More examples.
GitLab
53,175,030
54
Q1: What's the difference between

```toml
concurrent = 3

[[runners]]
  ..
  executor = "shell"
```

and

```toml
concurrent = 3

[[runners]]
  ...
  executor = "shell"

[[runners]]
  ...
  executor = "shell"

[[runners]]
  ...
  executor = "shell"
```

Q2: Does it make sense to have 3 executors (workers) of the same type on a single runner with global concurrent = 3? Or can a single executor with global concurrent = 3 do multiple jobs in parallel safely?

Q3: How are they related: runners.limit, runners.request_concurrency and concurrent?

Thanks
GitLab's documentation on runners describes them as:

> (...) isolated (virtual) machines that pick up jobs through the coordinator API of GitLab CI

Therefore, each runner is an isolated process responsible for picking up requests for job executions and dealing with them according to pre-defined configurations. As an isolated process, each runner has the capability of creating 'sub-processes' (also called machines) in order to run jobs.

When you define a [[runners]] section in your config.toml, you're configuring a runner and setting how it should deal with job execution requests. In your questions, you mentioned two of those "how to deal with job execution requests" settings:

- limit: "Limit how many jobs can be handled concurrently". In other words, how many 'sub-processes' can be created by a runner in order to execute jobs simultaneously.
- request_concurrency: "Limit number of concurrent requests for new jobs from GitLab". In other words, how many job execution requests a runner can take from the GitLab CI job queue simultaneously.

Also, there are some settings that apply to a machine globally. In your question you mentioned one of them:

- concurrent: "Limit how many jobs globally can be run concurrently. This is the most upper limit of number of jobs using all defined runners". In other words, it limits the maximum amount of 'sub-processes' that can run jobs simultaneously.

Thus, keeping in mind the difference between a runner and its sub-processes, and also the difference between specific runner settings and global machine settings:

Q1: The difference is that in your 1st example you have one runner and in your 2nd example you have three runners. It's worth mentioning that in both examples your machine would only allow running 3 jobs simultaneously.

Q2: Not only can a single runner run multiple jobs concurrently and safely, but you can also control how many jobs you want it to handle (using the aforementioned limit setting). Also, there is no problem with having similar runners running on the same machine. How you define your runners' configurations is up to you and your infrastructure capabilities. Also, please notice that an executor only defines how to run your job. It isn't the only thing that defines a runner, and it isn't a synonym for "worker". The ones doing the work are your runners and their sub-processes.

Q3: To summarize: you can define one or many workers on the same machine. Each one is an isolated process. A runner's limit is how many sub-processes of a runner process can be created to run jobs concurrently. A runner's request_concurrency is how many requests a runner can handle from the GitLab CI job queue. Finally, setting a value for concurrent will limit how many jobs can be executed on your machine at the same time, across all runners running on the machine.

References: For better understanding, I really recommend you read about the autoscaling algorithm and parameters. Finally, I think you might find this question on how to run runners in parallel on the same server useful.
GitLab
54,534,387
54
First time GitLab user and I'm a bit confused when reviewing someone's merge request. Whenever I add a comment I get prompted with two options: Submit review and Add comment now. What's the difference between the two? Why is there a need for two options?
Just heard this from my colleagues, so not sure if this is 100% accurate:

- Add comment now: instantly adds the comment to the review and notifies the reviewer that a comment has been added. So if you choose this option X times, the reviewer receives X notifications.
- Submit review: you can add as many comments as you want in as many files as you want. If you choose this option at the end of your review, the reviewer will receive only 1 notification containing all of your comments.
GitLab
71,407,947
54
How can we configure gitlab to keep only the last 10 CI jobs/builds and keep deleting the rest? For example , in Jenkins , we can configure the job to keep only last X builds.
UPDATE

As of GitLab Release 12.6, deleting a pipeline is an option in the GUI for users in the Owner role:

1. Click the pipeline you want to remove in the pipelines list.
2. Hit the red Delete button in the upper right of the pipeline details page.

As of GitLab Release 11.6, deleting a pipeline is an option, for users in the Owner role only, via the API. You need:

- an API token,
- the id of the project,
- the pipeline_id of the pipeline you wish to remove.

Example using curl from the docs, for project id 1 and pipeline_id 46:

```bash
curl --header "PRIVATE-TOKEN: <your_access_token>" --request "DELETE" "https://gitlab.example.com/api/v4/projects/1/pipelines/46"
```

Documentation is here.
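There is no built-in retention setting comparable to Jenkins', but you can script one on top of that same API. A minimal sketch, assuming curl and jq are available, the token has Owner rights, and the host/project ID/token are placeholders; it relies on the pipelines endpoint returning newest-first (the default ordering):

```bash
#!/bin/bash
# Delete all pipelines for a project except the 10 most recent ones.
GITLAB="https://gitlab.example.com"
PROJECT_ID=1
TOKEN="<your_access_token>"

curl --silent --header "PRIVATE-TOKEN: $TOKEN" \
  "$GITLAB/api/v4/projects/$PROJECT_ID/pipelines?per_page=100" \
  | jq '.[10:] | .[].id' \
  | while read -r id; do
      curl --silent --request DELETE --header "PRIVATE-TOKEN: $TOKEN" \
        "$GITLAB/api/v4/projects/$PROJECT_ID/pipelines/$id"
    done
```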
GitLab
53,355,578
53
My GitLab is on a virtual machine on a host server. I reach the VM with a non-standard SSH port (i.e. 766), which an iptables rule then forwards from host:766 to vm:22. So when I create a new repo, the instructions to add a remote provide a malformed URL (as it doesn't use the 766 port). For instance, the web interface gives me this:

Malformed:

```
git remote add origin git@gitlab.example.com:group/project.git
```

instead of a URL containing :766/ before the group:

Well-formed:

```
git remote add origin git@gitlab.example.com:766/group/project.git
```

So every time I create a repo, I have to make the modification manually, and the same goes for my collaborators. How can I fix that?
In Omnibus-packaged versions you can modify that property in the /etc/gitlab/gitlab.rb file:

```ruby
gitlab_rails['gitlab_shell_ssh_port'] = 766
```

Then, you'll need to reconfigure GitLab:

```bash
# gitlab-ctl reconfigure
```

Your URIs will then be correctly displayed as ssh://git@gitlab.example.com:766/group/project.git in the web interface.
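To check the forwarded port end to end, something like this should work (host per your setup; the ssh:// URL form is the one that carries a port):

```bash
ssh -T -p 766 git@gitlab.example.com
# or clone using the full ssh:// form:
git clone ssh://git@gitlab.example.com:766/group/project.git
```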
GitLab
18,517,189
52
I have found that GitLab and SourceTree support icons for repositories, which makes them more specific and easy to find at a glance. How is this possible?
We as developers sometimes need a change to make our tools look different. You can add a small (I prefer 96px x 96px) logo.png image file to the root of your repository, which makes your project more specific and easy to find in GitLab or the SourceTree git client. Unfortunately GitHub does not support this feature. It is very simple, but it works!

Update: Thanks to your comments I have found another way, within the GitLab repository settings, where a project avatar can be uploaded directly.

Hope you enjoy this trick and make your tools more fun :)
GitLab
40,049,648
52
After Gitlab switched its markdown engine to CommonMark it's no longer as easy to add things like custom styling to your markdown files. I've used Gitlab for some time and for the longest time I've liked how nicely I could make my README.md file look, having a centered icon, title and description for my project. When they switched the engine all my markdown files that relied on having such stylings look really bad. How do I center text in Gitlab after the transition to CommonMark?
Update

I checked out an old project of mine and noticed that it was already centered. It turns out that CommonMark allows you to set align="center" on <div> tags as well! So the simplest solution for centering is currently (note the empty line after the opening <div>):

```html
<div align="center">

# This is gonna be centered!

</div>
```

Original

The only CommonMark HTML object that supports centering (as far as I know) is a centered table cell. First you might think about just making a table and then using align="center", but the table won't take up the entire width of the page, so you'd get a small table on the left-hand side of the page, which wouldn't solve our problem of wanting to center stuff relative to the page rather than the table.

To get around this, we set the table width (not using CSS with an inline style tag, since that's not supported in CommonMark at the time of writing) to a large value that will take up way more than the total width of the page. Since the max-width: CSS property of tables in GitLab markdown is 100%, setting a ridiculously high width="" essentially sets the table width: to 100% using only the allowed pure-HTML width="" attribute.

The markdown below, if placed in e.g. README.md in your GitLab project, will result in a 100%-width table with a centered image, title and description. The most notable part is that we're setting width="9999" on the <td> element in the table.

```html
<table align="center"><tr><td align="center" width="9999">
<img src="/icon.png" align="center" width="150" alt="Project icon">

# MyProject

Description for my awesome project
</td></tr></table>

... More content
```
GitLab
53,273,660
52
I am deploying my React app using GitLab Pages, and it works well. Here is my gitlab-ci.yml:

```yaml
# Using the node alpine image to build the React app
image: node:alpine

# Announce the URL as per CRA docs
# https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#advanced-configuration
variables:
  PUBLIC_URL: /

# Cache node modules - speeds up future builds
cache:
  paths:
    - client/node_modules

# Name the stages involved in the pipeline
stages:
  - deploy

# Job name for gitlab to recognise this results in assets for Gitlab Pages
# https://docs.gitlab.com/ee/user/project/pages/introduction.html#gitlab-pages-requirements
pages:
  stage: deploy
  script:
    - cd client
    - npm install # Install all dependencies
    - npm run build --prod # Build for prod
    - cp public/index.html public/404.html # Not necessary, but helps with https://medium.com/@pshrmn/demystifying-single-page-applications-3068d0555d46
    - mv public _public # CRA and gitlab pages both use the public folder. Only do this in a build pipeline.
    - mv build ../public # Move build files to public dir for Gitlab Pages
  artifacts:
    paths:
      - public # The built files for Gitlab Pages to serve
  only:
    - master # Only run on master branch
```

Now, I have just created a dev version, based on my develop branch. I would like to have 2 versions of my React app with 2 different URLs. How can I do that? For example, right now I have my-react-app.com linked to the master branch. How should I have dev.my-react-app.com, or even my-react-app.gitlab.io, linked to the develop branch?
I've had success using the browsable artifacts for this purpose. In your example, you would create a job for your develop branch and set the PUBLIC_URL to the path on gitlab.io where the job's artifacts are published:

```yaml
develop:
  artifacts:
    paths:
      - public
  environment:
    name: Develop
    url: "https://$CI_PROJECT_NAMESPACE.gitlab.io/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/public/index.html"
  script: |
    # whatever
  stage: deploy
  variables:
    PUBLIC_URL: "/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/public"
```

Setting the environment as indicated produces a »Review app« link in relevant merge requests, allowing you to get to the artifacts with a single click.

Note: if your repository is in a subgroup, you need to insert the subgroup name in the two places above, between /-/ and $CI_PROJECT_NAME, for the resulting URLs to work.
GitLab
55,596,789
52
I am using GitLab 7.7.2 and tried to remove a tag in a repository in GitLab. I could remove the tag in my local repository, but not in origin. How do I remove a tag in a GitLab repository?

```
$ git tag -d Tag_AAA
Deleted tag 'Tag_AAA' (was d10bff2)
$ git push --delete origin Tag_AAA
remote: GitLab: You don't have permission
To git@gitlab.example.com:root/Repository.git
 ! [remote rejected] Tag_AAA (pre-receive hook declined)
error: failed to push some refs to 'git@gitlab.example.com:root/Repository.git'
```
```bash
# delete locally:
git tag -d <tag>

# delete remotely:
git push origin :refs/tags/<tag>

# another way to delete remotely:
git push --delete origin <tag>
```
GitLab
30,858,781
51
I am trying to push to the master branch of a repo and am failing to do so, since it is protected. I looked into the project settings and do not see any option for protected branches; the only option I can see is Members.

```
remote: GitLab: You are not allowed to push code to protected branches on this project.
To git@gitlab.example.com:cmd/release.git
 ! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@gitlab.example.com:cmd/release.git'
```

My repo has only one branch, with no contents in it so far. I do see the protected branches option in my other repos, but not in this specific one. It is a new repo with no contents and only the default branch. I have the Master permission. Unfortunately I am not able to upload an image here. Please suggest how to push code to the master branch.
(12/17/2018)

1. The symptoms: git push fails with "error: failed to push some refs to", and git push -f fails with "remote rejected".
2. The branch is in a protected state and cannot be force-pushed to (see Gitlab - Repository - Branches).
3. Temporarily remove the branch protection: Gitlab - Settings - Repository - Protected Branches - Unprotect.
4. Try pushing again: git push -f
5. Optionally, re-add the protection afterwards.
GitLab
42,073,357
51
What is the easiest method to list all the projects and groups in GitLab using my private token?
If only your private token is available, you can only use the API.

PROJECTS

Use the following command to request projects:

```bash
curl "https://<host>/api/v4/projects?private_token=<your private token>"
```

This will return the first 20 entries. To get more you can add the parameter per_page:

```bash
curl "https://<host>/api/v4/projects?private_token=<your private token>&per_page=100"
```

With this parameter you can request between 20 and 100 entries (see the REST API pagination documentation). If you now want all projects, you have to loop through the pages. To get to another page, add the parameter page:

```bash
curl "https://<host>/api/v4/projects?private_token=<your private token>&per_page=100&page=<page_number>"
```

Now you may want to know how many pages there are. For that, add the curl parameter --head. This will not return the payload, but the header. The result should look like this:

```
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 13 Jul 2017 17:43:24 GMT
Content-Type: application/json
Content-Length: 29428
Cache-Control: no-cache
Link: <request link>
Vary: Origin
X-Frame-Options: SAMEORIGIN
X-Next-Page: 2
X-Page: 1
X-Per-Page: 20
X-Prev-Page:
X-Request-Id: 80ecc167-4f3f-4c99-b09d-261e240e7fe9
X-Runtime: 4.117558
X-Total: 312257
X-Total-Pages: 15613
Strict-Transport-Security: max-age=31536000
```

The two interesting parts here are X-Total and X-Total-Pages. The first is the count of available entries and the second the count of total pages. I suggest using python or some other kind of script to handle the requests and concatenate the results at the end (see the sketch below). If you want to refine the search, consult this wiki page: https://docs.gitlab.com/ce/api/projects.html#projects-api

GROUPS

For groups, simply replace projects with groups in the curls: https://docs.gitlab.com/ce/api/groups.html#list-groups

UPDATE: Here is the official list of GitLab API clients/wrappers: https://docs.gitlab.com/ee/api/rest/#third-party-clients. I highly recommend using one of these.
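A minimal version of that pagination loop in shell, assuming curl and jq are installed (the host and token are placeholders; id and path_with_namespace are fields of the documented projects payload):

```bash
#!/bin/bash
HOST="https://gitlab.example.com"
TOKEN="<your private token>"

page=1
while :; do
  resp=$(curl --silent "$HOST/api/v4/projects?private_token=$TOKEN&per_page=100&page=$page")
  # Stop once a page comes back empty.
  [ "$(echo "$resp" | jq 'length')" -eq 0 ] && break
  echo "$resp" | jq -r '.[] | "\(.id)\t\(.path_with_namespace)"'
  page=$((page + 1))
done
```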
GitLab
45,068,904
50
I typed a wrong ID (my mistake) and I think my computer's IP is permanently banned. I'd like to un-ban my IP so that I can git clone my desired git repository. But when I try to git clone my git repository, it says:

```
remote: HTTP Basic: Access denied
fatal: Authentication failed for "~~my repository"
```

How can I access my git repository again? Or how can I reset my banned state? I think typing a wrong ID only once and being permanently banned is somewhat harsh.
It seems that your credential manager stored the wrong authentication and reuses it. Reset it:

```bash
git config --system --unset credential.helper
```

More information:

- Remove credentials from Git
- GitLab remote: HTTP Basic: Access denied and fatal Authentication
GitLab
51,581,582
50
I'm using Gitlab CI, and so have been working on a fairly complex .gitlab-ci.yml file. The file has an after_script section which runs when the main task is complete, or the main task has failed somehow. Problem: I need to do different cleanup based on whether the main task succeeded or failed, but I can't find any Gitlab CI variable that indicates the result of the main task. How can I tell, inside the after_script section, whether the main task has succeeded or failed?
Since gitlab-runner 13.5, you can use the CI_JOB_STATUS variable:

```yaml
test_job:
  # ...
  after_script:
    - >
      if [ $CI_JOB_STATUS == 'success' ]; then
        echo 'This will only run on success'
      else
        echo 'This will only run when job failed or is cancelled'
      fi
```

See GitLab's documentation on predefined variables: https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
GitLab
49,867,981
49
We are using GitLab 8.10.1 with many groups and projects. Many of the projects happen to be forks of other projects. Our problem is that whenever somebody opens a merge request for a project, the default target branch is NOT the default branch of the project, but that of one very specific other project. Is there a way to override this setting somehow? Just to make it clear: I know how to set the default branch of a project, and those settings appear to be correct; however, GitLab doesn't seem to use them when creating merge requests. This issue is very annoying and has led to weird situations when people didn't pay attention and made merge requests with a completely different "master" as target.
The default MR target depends on whether or not the repository is a GitLab fork.

Forks

If the repository is a GitLab fork, then the default MR target will be the default branch of the upstream repository. This relationship can be removed via the "Remove fork relationship" option on the Project settings page, after which the default MR target will be determined as normal for a non-fork repository (described below). At time of writing, it is not possible to override the default MR target without removing the fork relationship, but that functionality has been requested in gitlab issue #14522.

Non-Forks

If the repository has no fork relationship, then the Default Branch setting on the Project settings page sets both (1) the default MR target, and (2) the HEAD reference of the repo on the GitLab server (which determines the branch that's checked out when the repo is cloned). Note that, due to a bug/quirk in git, problems can occur if a branch that was once the Default Branch is later deleted from GitLab. At time of writing, it is not possible to change the default MR target independently of the Default Branch, but this functionality has been requested in gitlab issue #17909.
GitLab
38,913,163
48
I'm using the latest GitLab and the integrated Docker registry. For each project I create an individual deploy token. On the host where I want to deploy the images, I do docker login https://registry.example.com/project1, enter the deploy token, and get success. Pulling the image just works fine. On the same host I need to deploy another image from the same registry. So I do docker login https://registry.example.com/project2, enter the deploy token (which is different from token 1, because each project has its own deploy tokens) and get success. However, looking in .docker/config.json I can see docker just stores the domain, not the full URL, and so replaces the old auth token with the new one. So I can only pull image 2 now, but not image 1 anymore. Is this a bug in Docker? How can I use more than one auth/deploy token for the same registry?
You can use the --config option of the Docker client to store multiple credentials in different paths:

```bash
docker --config ~/.project1 login registry.example.com -u <username> -p <deploy_token>
docker --config ~/.project2 login registry.example.com -u <username> -p <deploy_token>
```

Then you are able to call Docker commands by selecting your credential:

```bash
docker --config ~/.project1 pull registry.example.com/project1
docker --config ~/.project2 pull registry.example.com/project2
```
GitLab
50,177,884
48
I'm trying to build a CI pipeline in GitLab, and I'd like to ask about making Docker work in GitLab CI. Following this issue, https://gitlab.com/gitlab-org/gitlab-runner/issues/4501#note_195033385, I tried the instructions both ways, with TLS and without TLS, but it's still stuck on the same error:

```
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running
```

Here is how I tried to troubleshoot the problem.

Enable TLS: using .gitlab-ci.yml and config.toml to enable TLS in the runner. This is my .gitlab-ci.yml:

```yaml
image: docker:19.03

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_NAME: image_name

services:
  - docker:19.03-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master
```

And this is my config.toml:

```toml
[[runners]]
  name = MY_RUNNER
  url = MY_HOST
  token = MY_TOKEN_RUNNER
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
```

Disable TLS: this is the .gitlab-ci.yml:

```yaml
image: docker:18.09

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  IMAGE_NAME: image_name

services:
  - docker:18.09-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master
```

And this is my config.toml:

```toml
[[runners]]
  environment = ["DOCKER_TLS_CERTDIR="]
```

Does anyone have an idea?

Solution: see the accepted answer. Moreover, in my case and another one, it looks like the root cause was that the Linux server hosting GitLab didn't have permission to connect to Docker. Check the permissions and connectivity between GitLab and Docker on your server.
You want to set DOCKER_HOST to tcp://docker:2375. It's a "service", i.e. running in a separate container, by default named after the image name, rather than localhost. Here's a .gitlab-ci.yml snippet that should work:

```yaml
# Build and push the Docker image off of merges to master; based off
# of Gitlab CI support in https://pythonspeed.com/products/pythoncontainer/
docker-build:
  stage: build
  image:
    # An alpine-based image with the `docker` CLI installed.
    name: docker:stable
  # This will run a Docker daemon in a container (Docker-In-Docker), which will
  # be available at thedockerhost:2375. If you make e.g. port 5000 public in Docker
  # (`docker run -p 5000:5000 yourimage`) it will be exposed at thedockerhost:5000.
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    # Tell docker CLI how to talk to Docker daemon; see
    # https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
    DOCKER_HOST: tcp://thedockerhost:2375/
    # Use the overlayfs driver for improved performance:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    # GitLab has a built-in Docker image registry, whose parameters are set automatically.
    # See https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-the-gitlab-contai
    #
    # CHANGEME: You can use some other Docker registry though by changing the
    # login and image name.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  # Only build off of master branch:
  only:
    - master
```
GitLab
61,105,333
48
I have a subdirectory named "www" that is a repo:

```
site
|-- www/
|   |-- .git/
|   |-- index.html
|-- design/
|   |-- images.jpg
```

I'd like to change the repo to the parent directory so that the repo structure mirrors the original file structure as follows:

```
site
|-- .git/
|-- www/
|   |-- index.html
|-- design/
|   |-- images.jpg
```

Can this be done? Are there implications with then pushing the changes up to GitLab?
1. Create a www directory in your repo.
2. git mv the HTML files to that directory.
3. Optionally commit this step.
4. mv the design directory into your repo, and git add .
5. Commit.
6. Rename/move your whole repo (see the sketch below).
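Spelled out for the question's exact layout, a minimal sketch (paths are per the question; run from the directory containing site):

```bash
cd site/www                               # the current repo root
mkdir www
git mv index.html www/                    # step 2: move the tracked HTML into the new subdirectory
git commit -m "Move site files into www/" # step 3
mv ../design .                            # step 4: bring the design directory into the repo
git add design
git commit -m "Add design directory"      # step 5
# Step 6: rename/move the repo directory itself as needed; the .git directory
# travels with it, so the GitLab remote is unaffected, and pushing just
# publishes the renames/additions as ordinary commits.
```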
GitLab
32,082,366
47
I've read that GitLab is capable of sending messages to other servers via "web hooks" but I can't find where one would create one. Can someone point me in the right direction?
All the answers I've found in official documentation and on Stack Overflow for finding web hooks are incorrect. The Admin Area > Hooks page does NOT contain web hooks. It contains system hooks, which fire when you create/delete projects and users and things like that. This is not what you want.

To find your web hooks, go to the specific project > Settings > Web Hooks (on the sidebar in GitLab 6.1.0) page. These will fire on post-receive for the project in question. You can use a service like RequestBin to see what the payload looks like and to ensure you're firing these off correctly for debugging purposes.
GitLab
17,157,969
46
I just created a personal GitLab account and am trying to follow the steps on https://gitlab.com/help/ssh/README to deploy my SSH key to GitLab. I've completed up to step 5, and see my SSH key among 'Your SSH keys' in User Settings -> SSH keys. I'm now trying to complete the optional 6th step, testing the key. My GitLab username is khpeek, so I guessed my 'GitLab domain' is gitlab.com/khpeek. However, the test command

```bash
ssh -T git@gitlab.com/khpeek
```

yields an error message:

```
ssh: Could not resolve hostname gitlab.com/khpeek: Name or service not known
```

Apparently this is the wrong hostname. What would be the right one?
If you're using GitLab on gitlab.com, then the domain is simply gitlab.com, so you should run:

```bash
ssh -T git@gitlab.com
```
GitLab
43,319,094
46
I am running the following command from my Jenkinsfile. However, I get the error "The input device is not a TTY".

```bash
docker run -v $PWD:/foobar -it cloudfoundry/cflinuxfs2 /foobar/script.sh
```

Is there a way to run the script from the Jenkinsfile without using interactive mode? I basically have a file called script.sh that I would like to run inside the Docker container.
Remove the -it from your CLI to make it non-interactive and remove the TTY. If you don't need either, e.g. running your command inside of a Jenkins or cron script, you should do this.

Or you can change it to -i if you have input piped into the docker command that doesn't come from a TTY. If you have something like xyz | docker ... or docker ... <input in your command line, do this.

Or you can change it to -t if you want TTY support but don't have it available on the input device. Do this for apps that check for a TTY to enable color formatting of the output in your logs, or for when you later attach to the container with a proper terminal.

Or, if you need an interactive terminal and aren't running in a terminal on Linux or MacOS, use a different command line interface. PowerShell is reported to include this support on Windows.

What is a TTY? It's a terminal interface that supports escape sequences, moving the cursor around, etc., that comes from the old days of dumb terminals attached to mainframes. Today it is provided by the Linux command terminals and ssh interfaces. See the wikipedia article for more details.

To see the difference of running a container with and without a TTY, run a container without one: docker run --rm -i ubuntu bash. From inside that container, install vim with apt-get update; apt-get install vim. Note the lack of a prompt. When running vim against a file, try to move the cursor around within the file.
Jenkins
43,099,116
959
I've just started working with Jenkins and have run into a problem. After installing several plugins it said it needs to be restarted and went into a "shutting down" mode, but never restarts. How do I do a manual restart?
To restart Jenkins manually, you can use either of the following commands (by entering their URL in a browser): (jenkins_url)/safeRestart - Allows all running jobs to complete. New jobs will remain in the queue to run after the restart is complete. (jenkins_url)/restart - Forces a restart without waiting for builds to complete.
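If you would rather script this than use a browser, a hedged sketch with curl (host and credentials are placeholders; both endpoints expect a POST from an authenticated user):
curl -X POST -u admin:apiToken http://yourserver/safeRestart   # graceful restart
curl -X POST -u admin:apiToken http://yourserver/restart       # forced restart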
Jenkins
8,072,700
815
I'm trying to configure my e-mail on Jenkins/Hudson, and I constantly receive the error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty I've seen a good amount of information online about the error, but I have not gotten any to work. I'm using Sun's JDK on Fedora Linux (not OpenJDK). Here are a few things I've tried. I tried following the advice from this post, but copying the cacerts from Windows over to my Fedora box hosting Jenkins didn't work. I tried following this guide as I'm trying to configure Gmail as my SMTP server, but it didn't work either. I also tried to download and move those cacert files manually and move them over to my Java folder using a variation of the commands on this guide. I am open to any suggestions as I'm currently stuck right now. I have gotten it to work from a Windows Hudson server, but I am struggling on Linux.
This bizarre message means that the trust store you specified was: empty, not found, or couldn't be opened, due for example to: wrong/missing trustStorePassword, or file access permissions. See also @AdamPlumb's answer below.
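A quick sanity check that the trust store the JVM uses can actually be opened (the cacerts path shown is the Java 8 JRE layout and changeit is only the stock default password, so treat both as assumptions):
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit | head
If this command fails with a similar error, the file or password is the problem rather than Jenkins itself.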
Jenkins
6,784,463
613
I am new to docker. I just tried to use docker in my local machine(Ubuntu 16.04) with Jenkins. I configured a new job with below pipeline script. node { stage('Build') { docker.image('maven:3.3.3').inside { sh 'mvn --version' } } } But it fails with this error: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
If using jenkins The user jenkins needs to be added to the group docker: sudo usermod -a -G docker jenkins Then restart Jenkins. Otherwise If you arrived at this Stack Overflow question because you received this message from docker, but you don't use jenkins, most probably the error is the same: your unprivileged user does not belong to the docker group. You can do: sudo usermod -a -G docker $USER You can check it was successful by doing grep docker /etc/group and seeing something like this: docker:x:998:[user] in one of the lines. Then change your user's current group to docker (to avoid having to log out and log in again): newgrp docker
Jenkins
47,854,463
561
I have a project hosted in Git stash (now rebranded as Bitbucket Server). It is built using Jenkins. Now I made a typo while installing my Git locally. Like @ab.example instead of @abc.example After every build, Jenkins sends email notifications and it picks up my incorrect email address from Git commit and tries to send it. Even after I have changed the email address in my local Git, I still see Jenkins sending the emails to the old incorrect address. How can I fix this?
Locally set email-address (separately for each repository) Open Git Bash. Change the current working directory to the local repository in which you want to set your Git config email. Set your email address with the following command: git config user.email "you@example.com" Confirm that you have set your email address correctly with the following command. git config user.email Globally set email-address (only used when nothing is set locally) Open Git Bash. Set your email address with the following command: git config --global user.email "you@example.com" Confirm that you have set your email address: git config --global user.email Or using environment variables: GIT_AUTHOR_EMAIL=you@example.com GIT_COMMITTER_EMAIL=you@example.com PS: Info from the GitHub official guide
Jenkins
37,805,621
407
I added a new job in Jenkins, which I want to schedule periodically. From Configure job, I am checking the "Build Periodically" checkbox and in the Schedule text field added the expression: 15 13 * * * But it does not run at the scheduled time. Is it the correct procedure to schedule a job? The job should run at 4:20 AM, but it is not running.
By setting the schedule period to 15 13 * * * you tell Jenkins to schedule the build every day of every month of every year at the 15th minute of the 13th hour of the day. Jenkins uses a cron expression, and the different fields are: MINUTES Minutes in one hour (0-59) HOURS Hours in one day (0-23) DAYMONTH Day in a month (1-31) MONTH Month in a year (1-12) DAYWEEK Day of the week (0-7) where 0 and 7 are Sunday If you want to schedule your build every 5 minutes, this will do the job: */5 * * * * If you want to schedule your build every day at 8h00, this will do the job: 0 8 * * * Since 2014, Jenkins has a new parameter, H (extract from the Jenkins code documentation): To allow periodically scheduled tasks to produce even load on the system, the symbol H (for “hash”) should be used wherever possible. For example, using 0 0 * * * for a dozen daily jobs will cause a large spike at midnight. In contrast, using H H * * * would still execute each job once a day, but not all at the same time, better using limited resources. Note also that: The H symbol can be thought of as a random value over a range, but it actually is a hash of the job name, not a random function, so that the value remains stable for any given project. More examples of using 'H'
Jenkins
12,472,645
377
I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers. Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers. Petazzoni lists two reasons why dind is troublesome: It does not cooperate well with Linux Security Modules (LSM). It creates a mismatch in file systems that creates problems for the containers created inside parent containers. From that blog post, he describes the following alternative, [The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag. Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with: docker run -v /var/run/docker.sock:/var/run/docker.sock ... Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
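A minimal sketch of starting a Jenkins container with the socket bind-mounted (image tag, port and volume name are illustrative only):
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
Note that the user inside the container also needs permission to use the socket, e.g. by matching the host's docker group ID.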
Jenkins
27,879,713
352
Is it possible to exchange jobs between 2 different Jenkins'? I'm searching for a way to export/import jobs.
Using the Jenkins command line is another option, see https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI create-job: Creates a new job by reading stdin as a configuration XML file. get-job: Dumps the job definition XML to stdout So you can do java -jar jenkins-cli.jar -s http://server get-job myjob > myjob.xml java -jar jenkins-cli.jar -s http://server create-job newmyjob < myjob.xml It works fine for me and I am used to storing the exported XML inside my version control system
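To copy every job in one go, a rough bash loop over the same two CLI commands (server URLs are placeholders, and job names containing spaces would need extra quoting care):
for job in $(java -jar jenkins-cli.jar -s http://oldserver list-jobs); do
  java -jar jenkins-cli.jar -s http://oldserver get-job "$job" > "$job.xml"
  java -jar jenkins-cli.jar -s http://newserver create-job "$job" < "$job.xml"
done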
Jenkins
8,424,228
330
When writing jenkins pipelines it seems to be very inconvenient to commit each new change in order to see if it works. Is there a way to execute these locally without committing the code?
You cannot execute a Pipeline script locally, since its whole purpose is to script Jenkins. (Which is one reason why it is best to keep your Jenkinsfile short and limited to code which actually deals with Jenkins features; your actual build logic should be handled with external processes or build tools which you invoke via a one-line sh or bat step.) If you want to test a change to Jenkinsfile live but without committing it, use the Replay feature added in 1.14. JENKINS-33925 tracks the feature request for an automated test framework.
Jenkins
36,309,063
314
I would like to be able to do something like: AOEU=$(echo aoeu) and have Jenkins set AOEU=aoeu. The Environment Variables section in Jenkins doesn't do that. Instead, it sets AOEU='$(echo aoeu)'. How can I get Jenkins to evaluate a shell command and assign the output to an environment variable? Eventually, I want to be able to assign the executor of a job to an environment variable that can be passed into or used by other scripts.
This can be done via EnvInject plugin in the following way: Create an "Execute shell" build step that runs: echo AOEU=$(echo aoeu) > propsfile Create an Inject environment variables build step and set "Properties File Path" to propsfile. Note: This plugin is (mostly) not compatible with the Pipeline plugin.
Jenkins
10,625,259
281
Is it possible to turn off sonar (www.sonarsource.org) measurements for specific blocks of code, which one doesn't want to be measured? An example is the "Preserve Stack Trace" warning which Findbugs outputs. When leaving the server, I might well want to only pass the message back to the client, not including the actual exception which I just caught, if that exception is unknown to the client (because the client doesn't have the JAR in which that exception was contained for example).
You can annotate a class or a method with SuppressWarnings @java.lang.SuppressWarnings("squid:S00112") squid:S00112 in this case is a Sonar issue ID. You can find this ID in the Sonar UI. Go to Issues Drilldown. Find an issue you want to suppress warnings on. In the red issue box in your code there is a Rule link with a definition of the given issue. Once you click that you will see the ID at the top of the page.
Jenkins
10,971,968
280
Our Jenkins server has a job that has been running for three days, but is not doing anything. Clicking the little X in the corner does nothing, and the console output log doesn't show anything either. I've checked on our build servers and the job doesn't actually seem to be running at all. Is there a way to tell jenkins that the job is "done", by editing some file or lock or something? Since we have a lot of jobs we don't really want to restart the server.
I also had the same problem and fixed it via the Jenkins Script Console. Go to "Manage Jenkins" > "Script Console" and run a script: Jenkins.instance.getItemByFullName("JobName") .getBuildByNumber(JobNumber) .finish(hudson.model.Result.ABORTED, new java.io.IOException("Aborting build") ); You just have to specify your JobName and JobNumber.
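If several builds are stuck at once, a hedged variation of the same idea that aborts every currently running build (a sketch only; use with care on a busy server):
Jenkins.instance.getAllItems(hudson.model.Job.class).each { job ->
  job.builds.findAll { it.isBuilding() }.each { build ->
    // same finish() call as above, applied to each running build
    build.finish(hudson.model.Result.ABORTED, new java.io.IOException("Aborting build"))
  }
}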
Jenkins
14,456,592
264
Are comments possible in a Jenkinsfile? If so, what's the syntax? I am using the declarative pipeline syntax. I want to comment out the "post" section below until my SMTP server is working. pipeline { agent { label 'docker-build-slave' } environment { IMAGE = 'registry.gitlab.com/XXXXX/bible-server' DOCKER_REGISTRY_CREDENTIALS = credentials('DOCKER_REGISTRY_CREDENTIALS') } options { timeout(10) } stages { stage('Test') { steps { sh 'yarn' sh 'npm test' } } stage('Build') { when { branch '*/master' } steps { sh 'docker login -u ${DOCKER_REGISTRY_CREDENTIALS_USR} -p ${DOCKER_REGISTRY_CREDENTIALS_PSW} registry.gitlab.com' sh 'docker build -t ${IMAGE}:${BRANCH_NAME} .' sh 'docker push ${IMAGE}:${BRANCH_NAME}' } } stage('Deploy') { when { branch '*/master' } steps { echo 'Deploying ..' } } } post { success { mail to: "[email protected]", subject:"SUCCESS: ${currentBuild.fullDisplayName}", body: "Yay, we passed." } failure { mail to: "[email protected]", subject:"FAILURE: ${currentBuild.fullDisplayName}", body: "Boo, we failed." } } }
The Jenkinsfile is written in groovy which uses the Java (and C) form of comments: /* this is a multi-line comment */ // this is a single line comment
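Applied to the question, the whole post section can be disabled with a block comment until the SMTP server works:
/*
post {
    success { ... }
    failure { ... }
}
*/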
Jenkins
42,309,957
250
Is there a way to reset all (or just disable the security settings) from the command line without a user/password as I have managed to completely lock myself out of Jenkins?
The simplest solution is to completely disable security - change true to false in the /var/lib/jenkins/config.xml file. <useSecurity>true</useSecurity> A one-liner to achieve the same: sed -i 's/<useSecurity>true<\/useSecurity>/<useSecurity>false<\/useSecurity>/g' /var/lib/jenkins/config.xml Then just restart Jenkins: sudo service jenkins restart And then go to the admin panel and set everything once again. If you are running your Jenkins inside a Kubernetes pod and cannot run the service command, then you can just restart Jenkins by deleting the pod: kubectl delete pod <jenkins-pod-name> Once the command is issued, Kubernetes will terminate the old pod and start a new one.
Jenkins
6,988,849
247
How can I trigger build of another job from inside the Jenkinsfile? I assume that this job is another repository under the same github organization, one that already has its own Jenkins file. I also want to do this only if the branch name is master, as it doesn't make sense to trigger downstream builds of any local branches. Update: stage 'test-downstream' node { def job = build job: 'some-downtream-job-name' } Still, when executed I get an error No parameterized job named some-downtream-job-name found I am sure that this job exists in jenkins and is under the same organization folder as the current one. It is another job that has its own Jenkinsfile. Please note that this question is specific to the GitHub Organization Plugin which auto-creates and maintains jobs for each repository and branch from your GitHub Organization.
In addition to the above-mentioned answers: I wanted to start a job with a simple parameter passed to a second pipeline and found the answer on http://web.archive.org/web/20160209062101/https://dzone.com/refcardz/continuous-delivery-with-jenkins-workflow So I used: stage ('Starting ART job') { build job: 'RunArtInTest', parameters: [[$class: 'StringParameterValue', name: 'systemname', value: systemname]] }
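On current Pipeline versions there is also a shorter form of the same call using the string parameter helper (same hypothetical job and parameter names as above):
build job: 'RunArtInTest', parameters: [string(name: 'systemname', value: systemname)]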
Jenkins
36,306,883
226
Recently Maven build jobs running in Jenkins are failing with the below exception saying that they couldn't pull dependencies from Maven Central and should use HTTPS. I'm not sure how to change the requests from HTTP to HTTPS. Could someone guide me on this matter? [ERROR] Unresolveable build extension: Plugin org.apache.maven.wagon:wagon-ssh:2.1 or one of its dependencies could not be resolved: Failed to collect dependencies for org.apache.maven.wagon:wagon-ssh:jar:2.1 (): Failed to read artifact descriptor for org.apache.maven.wagon:wagon-ssh:jar:2.1: Could not transfer artifact org.apache.maven.wagon:wagon-ssh:pom:2.1 from/to central (http://repo.maven.apache.org/maven2): Failed to transfer file: http://repo.maven.apache.org/maven2/org/apache/maven/wagon/wagon-ssh/2.1/wagon-ssh-2.1.pom. Return code is: 501, ReasonPhrase:HTTPS Required. -> [Help 2] Waiting for Jenkins to finish collecting data[ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.4.1 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.apache.maven.plugins:maven-clean-plugin:jar:2.4.1: Could not transfer artifact org.apache.maven.plugins:maven-clean-plugin:pom:2.4.1 from/to central (http://repo.maven.apache.org/maven2): Failed to transfer file: http://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.4.1/maven-clean-plugin-2.4.1.pom. Return code is: 501 , ReasonPhrase:HTTPS Required. -> [Help 1]
The reason for the observed error is explained in Central 501 HTTPS Required Effective January 15, 2020, The Central Repository no longer supports insecure communication over plain HTTP and requires that all requests to the repository are encrypted over HTTPS. It looks like the latest versions of Maven (tried with 3.6.0, 3.6.1) are already using the HTTPS URL by default. Here are the dates when the major repositories will switch: Your Java builds might break starting January 13th (if you haven't yet switched repo access to HTTPS) Update: It seems that from Maven 3.2.3 on, Maven Central is accessed via HTTPS See https://stackoverflow.com/a/25411658/5820670 Maven Change log (http://maven.apache.org/docs/3.2.3/release-notes.html)
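If you cannot upgrade Maven itself, one standard workaround is to force an HTTPS mirror of Central in ~/.m2/settings.xml (a minimal sketch; merge it into any existing settings file):
<settings>
  <mirrors>
    <mirror>
      <id>central-https</id>
      <mirrorOf>central</mirrorOf>
      <url>https://repo.maven.apache.org/maven2</url>
    </mirror>
  </mirrors>
</settings>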
Jenkins
59,763,531
218
I'm trying to set up Jenkins-ci for a project using GitHub. I've already set up Jenkins with the appropriate plugins. I want Jenkins to run build scripts only whenever someone on the project pushes to master. So far I've been able to set it up so that a build will be triggered anytime anyone pushes to anywhere, but that is too broad. I've done this with post-receive service hooks on Git. I've read the Jenkins wiki, and a couple of tutorials, but this particular detail is missing... is it something to do with polling maybe? Or should work be done on the Git side, so that Git only triggers Jenkins when master is changed?
As already noted by gezzed in his comment, meanwhile there is a good solution (described in Polling must die: triggering Jenkins builds from a Git hook): Set the Jenkins job's build trigger to Poll SCM, but do not specify a schedule. Create a GitHub post-receive trigger to notify the URL http://yourserver/jenkins/git/notifyCommit?url=<URL of the Git repository>&token=<get token from git to build remotely> This will trigger all builds that poll the specified Git repository. However, polling actually checks whether anything has been pushed to the used branch. It works perfectly.
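You can verify the Jenkins side before wiring up GitHub by simulating the hook from a shell (repository URL is a placeholder):
curl "http://yourserver/jenkins/git/notifyCommit?url=https://github.com/your/repo.git"
Jenkins should answer by listing the jobs whose polling was triggered.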
Jenkins
5,784,329
215
I have a problem with jenkins, setting "git", shows the following error: Failed to connect to repository : Command "git ls-remote -h https://username@bitbucket.org/person/projectmarket.git HEAD" returned status code 128: stdout: stderr: fatal: Authentication failed I have tested with ssh: git@bitbucket.org:person/projectmarket.git This is error: Failed to connect to repository : Command "git ls-remote -h git@bitbucket.org:person/projectmarket.git HEAD" returned status code 128: stdout: stderr: Host key verification failed. fatal: The remote end hung up unexpectedly I've also done these steps with "SSH key". Login under Jenkins sudo su jenkins Copy your github key to Jenkins .ssh folder cp ~/.ssh/id_rsa_github* /var/lib/jenkins/.ssh/ Rename the keys mv id_rsa_github id_rsa mv id_rsa_github.pub id_rsa.pub but the Git repository is still not working in Jenkins. Thanks for the help!
Change to the jenkins user and run the command manually: git ls-remote -h git@bitbucket.org:person/projectmarket.git HEAD You will get the standard SSH warning when first connecting to a new host via SSH: The authenticity of host 'bitbucket.org (207.223.240.181)' can't be established. RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40. Are you sure you want to continue connecting (yes/no)? Type yes and press Enter. The host key for bitbucket.org will now be added to the ~/.ssh/known_hosts file and you won't get this error in Jenkins anymore.
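Alternatively, on a headless build machine you can pre-populate known_hosts non-interactively (verify the printed fingerprint out of band, since ssh-keyscan blindly trusts whatever host answers):
ssh-keyscan bitbucket.org >> /var/lib/jenkins/.ssh/known_hosts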
Jenkins
15,174,194
214
Is there a way to show the Jenkins build status on my project's GitHub Readme.md? I use Jenkins to run continuous integration builds. After each commit it ensures that everything compiles, as well as executes unit and integration tests, before finally producing documentation and release bundles. There's still a risk of inadvertently committing something that breaks the build. It would be good for users visiting the GitHub project page to know the current master is in that state.
Ok, here's how you can set up Jenkins to set GitHub build statuses. This assumes you've already got Jenkins with the GitHub plugin configured to do builds on every push. Go to GitHub, log in, go to Settings, Developer Settings, Personal access tokens and click on Generate new token. Check repo:status (I'm not sure this is necessary, but I did it, and it worked for me). Generate the token, copy it. Make sure the GitHub user you're going to use is a repository collaborator (for private repos) or is a member of a team with push and pull access (for organization repos) to the repositories you want to build. Go to your Jenkins server, log in. Manage Jenkins → Configure System Under GitHub Web Hook select Let Jenkins auto-manage hook URLs, then specify your GitHub username and the OAuth token you got in step 3. Verify that it works with the Test Credential button. Save the settings. Find the Jenkins job and add Set build status on GitHub commit to the post-build steps That's it. Now do a test build and go to GitHub repository to see if it worked. Click on Branches in the main repository page to see build statuses. You should see green checkmarks:
Jenkins
14,274,293
210
We have a need to be able to skip a submodule in certain environments. The module in question contains integration tests and takes half an hour to run. So we want to include it when building on the CI server, but when developers build locally (and tests get run), we want to skip that module. Is there a way to do this with a profile setting? I've done some googling and looked at the other questions/answers here and haven't found a good solution. I suppose one option is to remove that submodule from the parent pom.xml entirely, and just add another project on our CI server to just build that module. Suggestions?
Maven version 3.2.1 added this feature, you can use the -pl switch (shortcut for --projects list) with ! or - (source) to exclude certain submodules. mvn -pl '!submodule-to-exclude' install mvn -pl -submodule-to-exclude install Be careful in bash the character ! is a special character, so you either have to single quote it (like I did) or escape it with the backslash character. The syntax to exclude multiple module is the same as the inclusion mvn -pl '!submodule1,!submodule2' install mvn -pl -submodule1,-submodule2 install EDIT Windows does not seem to like the single quotes, but it is necessary in bash ; in Windows, use double quotes (thanks @awilkinson) mvn -pl "!submodule1,!submodule2" install
Jenkins
8,304,110
209
What is the difference between an agent and a node in a jenkins pipeline? I've found those definitions: Node: A Pipeline performs most of the work in the context of one or more declared node steps. Agent: The agent directive specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins environment depending on where the agent directive is placed. So both are used for executing pipeline steps. But when to use which one?
The simple answer is, Agent is for declarative pipelines and node is for scripted pipelines. In declarative pipelines the agent directive is used for specifying which agent/slave the job/task is to be executed on. This directive only allows you to specify where the task is to be executed, which agent, slave, label or docker image. On the other hand, in scripted pipelines the node step can be used for executing a script/step on a specific agent, label, slave. The node step optionally takes the agent or label name and then a closure with code that is to be executed on that node. declarative and scripted pipelines (edit based on the comment): Declarative pipelines are a new extension of the pipeline DSL (a declarative pipeline is basically a pipeline script with only one step, a pipeline step with arguments, called directives; these directives should follow a specific syntax). The point of this new format is that it is more strict and therefore should be easier for those new to pipelines, allow for graphical editing and much more. Scripted pipelines are the fallback for advanced requirements.
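A minimal side-by-side sketch (the linux label is just an example): Declarative:
pipeline {
    agent { label 'linux' }   // where the whole pipeline runs
    stages {
        stage('Build') {
            steps { sh 'make' }
        }
    }
}
Scripted:
node('linux') {               // where this block runs
    stage('Build') {
        sh 'make'
    }
}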
Jenkins
42,050,626
199
We are running Jenkins 2.x and love the new Pipeline plugin. However, with so many branches in a repository, disk space fills up quickly. Is there any plugin that's compatible with Pipeline that I can wipe out the workspace on a successful build?
As @gotgenes pointed out, with Jenkins version 2.74 the following works (not sure since which exact version, maybe someone can edit and add it here): cleanWs() With Jenkins version 2.16 and the Workspace Cleanup Plugin, I use step([$class: 'WsCleanup']) to delete the workspace. You can view it by going to JENKINS_URL/job/<any Pipeline project>/pipeline-syntax and then selecting "step: General Build Step" from Sample step and then "Delete workspace when build is done" from Build step
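In a declarative pipeline, the "on a successful build" requirement from the question maps naturally onto a post condition:
post {
    success {
        cleanWs()   // wipe the workspace only when the build succeeded
    }
}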
Jenkins
37,468,455
189
I am currently using Jenkins on my development PC. I installed it on my development PC, because I had limited knowledge on this tool; so I tested on it in my development PC. Now, I feel comfortable with Jenkins as my long term "partner" in the build process and would like to "move" this Jenkins to a dedicated server. Before this I have done few builds and have the artifacts archived from each build. In particular, the build number is very important to me for version control. How can I export all the Jenkins information from my current PC to my new server?
Following the Jenkins wiki, you'll have to: Install a fresh Jenkins instance on the new server Be sure the old and the new Jenkins instances are stopped Archive all the content of the JENKINS_HOME of the old Jenkins instance Extract the archive into the new JENKINS_HOME directory Do not forget to change the owner of the new Jenkins files : chown -R jenkins:jenkins $JENKINS_HOME Launch the new Jenkins instance Do not forget to change documentation/links to your new instance of Jenkins :) JENKINS_HOME is by default located in ~/.jenkins on a Linux installation, yet to exactly find where it is located, go on the http://your_jenkins_url/configure page and check the value of the first parameter: Home directory; this is the JENKINS_HOME.
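A hedged sketch of the copy itself, assuming the default /var/lib/jenkins home on both machines and that both instances are stopped:
tar -C /var/lib/jenkins -czf jenkins-home.tar.gz .    # archive the old home
scp jenkins-home.tar.gz newserver:/tmp/
ssh newserver 'tar -C /var/lib/jenkins -xzf /tmp/jenkins-home.tar.gz && chown -R jenkins:jenkins /var/lib/jenkins'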
Jenkins
8,724,939
184
We already tried the approaches as listed below: https://github.com/oliverlockwood/jenkinsfile-idea-plugin https://st-g.de/2016/08/jenkins-pipeline-autocompletion-in-intellij After having searched the web for many hours on multiple days, we still haven't found a helpful resource on this matter. Thus, it appears to make sense to ask a new question here. We are developing our Java projects in IntelliJ idea and want to integrate our builds with Jenkins. When we create a Jenkinsfile in Idea, we do not get syntax highlighting or auto completion. Since we are new to Jenkins, those features would be really useful to us. How can we make Idea be more supportive with Jenkinsfiles? If there is no way to get syntax highlighting and auto completion for a Jenkinsfile in IntelliJ IDEA, what other editors would be helpful? Please note: we are working with Java projects, not Groovy projects. We've already tried the plugin https://github.com/oliverlockwood/jenkinsfile-idea-plugin. When the plugin is activated, the Jenkinsfile is recognized as such, but instead of syntax highlighting we get an error message, please see below. pipeline { agent { docker 'maven:3.3.3' } stages { stage('build') { steps { sh 'echo Hello, World!' } } } } IntelliJ IDEA highlights the p of pipeline as error. The error message reads: JenkinsTokenType.COMMENT, JenkinsTokenType.CRLF or JenkinsTokenType.STEP_KEY expected, got 'p' Thanks for any help!
If you want IDEA to recognize a Jenkinsfile as a Groovy file, then you can add the String "Jenkinsfile" as a valid file name pattern (normally contains file endings) for Groovy files. This is supported "out of the box" without requiring any additional Plugin (except the "Groovy" Plugin, but that is already part of IDEA). To do that go to the settings menu, open the "Editor" item and then "File Types". Now select "Groovy" in the upper list and add "Jenkinsfile". You can also use a regex like "Jenkinsfile*" if you want to be more flexible regarding an optional file ending for the Jenkinsfile. The setting should now look like this: Your example now looks like this in IDEA (with the Dracula theme): So IDEA now provides syntax highlighting and auto completion as far as I can tell. It suggests existing function/method names while writing, but I'm not a Groovy developer, thus I can't tell if some suggestions are missing.
Jenkins
47,796,757
182
As part of my build process, I am running a git commit as an execute shell step. However, if there are no changes in the workspace, Jenkins is failing the build. This is because git is returning an error code when there are no changes to commit. I'd like to either abort the build, or just mark it as unstable if this is the case. Any ideas?
To stop further execution when command fails: command || exit 0 To continue execution when command fails: command || true
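Applied to the scenario in the question, a slightly more targeted sketch is to only commit when something is actually staged (git diff --cached --quiet exits non-zero exactly when there are staged changes):
git diff --cached --quiet || git commit -m "Automated build commit"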
Jenkins
14,392,349
176
I want to move an existing job from one view to another but I can't find the way. Is the only way to copy the job and delete it from the other view? I would like to have the same name and for my experience Jenkins doesn't handle very well the renaming of jobs.
You can simply do it by editing the view (the "Edit view" link on the left side) and checking/unchecking the job checkboxes
Jenkins
28,562,666
172
This is probably very simple, but I can't find any hint anywhere. So how one is supposed to do that, in general and specifically on Mac?
These instructions apply if you installed using the official Jenkins Mac installer from http://jenkins-ci.org/ Execute uninstall script from terminal: '/Library/Application Support/Jenkins/Uninstall.command' or use Finder to navigate into that folder and double-click on Uninstall.command. Finally delete last configuration bits which might have been forgotten: sudo rm -rf /var/root/.jenkins ~/.jenkins If the uninstallation script cannot be found (older Jenkins version), use following commands: sudo launchctl unload /Library/LaunchDaemons/org.jenkins-ci.plist sudo rm /Library/LaunchDaemons/org.jenkins-ci.plist sudo rm -rf /Applications/Jenkins "/Library/Application Support/Jenkins" /Library/Documentation/Jenkins and if you want to get rid of all the jobs and builds: sudo rm -rf /Users/Shared/Jenkins and to delete the jenkins user and group (if you chose to use them): sudo dscl . -delete /Users/jenkins sudo dscl . -delete /Groups/jenkins These commands are also invoked by the uninstall script in newer Jenkins versions, and should be executed too: sudo rm -f /etc/newsyslog.d/jenkins.conf pkgutil --pkgs | grep 'org\.jenkins-ci\.' | xargs -n 1 sudo pkgutil --forget
Jenkins
11,608,996
171
I am Using Jenkins 2 for compiling Java Projects, I want to read the version from a pom.xml, I was following this example: https://github.com/jenkinsci/pipeline-plugin/blob/master/TUTORIAL.md The example suggest: It seems that there is some security problem accessing the File System but I can't figure out what it is giving (or why) that problem: I am just doing a little bit different than the example: def version() { String path = pwd(); def matcher = readFile("${path}/pom.xml") =~ '<version>(.+)</version>' return matcher ? matcher[0][1] : null } The Error I am getting when running the 'version' method : org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method groovy.lang.GroovyObject invokeMethod java.lang.String java.lang.Object (org.codehaus.groovy.runtime.GStringImpl call org.codehaus.groovy.runtime.GStringImpl) at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:165) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:117) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:103) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:15) at WorkflowScript.run(WorkflowScript:71) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:55) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79) at sun.reflect.GeneratedMethodAccessor408.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:100) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79) at sun.reflect.GeneratedMethodAccessor408.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79) at sun.reflect.GeneratedMethodAccessor408.invoke(Unknown Source) I am using these versions: Plugin Pipeline 2.1 Jenkins 2.2
Quickfix Solution: I had a similar issue and resolved it by doing the following Navigate to jenkins > Manage jenkins > In-process Script Approval There was a pending command, which I had to approve. Alternative 1: Disable sandbox As this article explains in depth, groovy scripts are run in sandbox mode by default. This means that a subset of groovy methods are allowed to run without administrator approval. It's also possible to run scripts not in sandbox mode, which implies that the whole script needs to be approved by an administrator at once. This saves users from approving each line at a time. Running scripts without sandbox can be done by unchecking this checkbox in your project config just below your script: Alternative 2: Disable script security As this article explains it is also possible to disable script security completely. First install the permissive script security plugin and after that change your jenkins.xml file and add this argument: -Dpermissive-script-security.enabled=true So your jenkins.xml will look something like this: <executable>..bin\java</executable> <arguments>-Dpermissive-script-security.enabled=true -Xrs -Xmx4096m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=80 --webroot="%BASE%\war"</arguments> Make sure you know what you are doing if you implement this!
Jenkins
38,276,341
161
How do you run a build step/stage only if building a specific branch? For example, run a deployment step only if the branch is called deployment, leaving everything else the same.
Doing the same in declarative pipeline syntax, below are few examples: stage('master-branch-stuff') { when { branch 'master' } steps { echo 'run this stage - ony if the branch = master branch' } } stage('feature-branch-stuff') { when { branch 'feature/*' } steps { echo 'run this stage - only if the branch name started with feature/' } } stage('expression-branch') { when { expression { return env.BRANCH_NAME != 'master'; } } steps { echo 'run this stage - when branch is not equal to master' } } stage('env-specific-stuff') { when { environment name: 'NAME', value: 'this' } steps { echo 'run this stage - only if the env name and value matches' } } More effective ways coming up - https://issues.jenkins-ci.org/browse/JENKINS-41187 Also look at - https://jenkins.io/doc/book/pipeline/syntax/#when The directive beforeAgent true can be set to avoid spinning up an agent to run the conditional, if the conditional doesn't require git state to decide whether to run: when { beforeAgent true; expression { return isStageConfigured(config) } } Release post and docs UPDATE New WHEN Clause REF: https://jenkins.io/blog/2018/04/09/whats-in-declarative equals - Compares two values - strings, variables, numbers, booleans - and returns true if they’re equal. I’m honestly not sure how we missed adding this earlier! You can do "not equals" comparisons using the not { equals ... } combination too. changeRequest - In its simplest form, this will return true if this Pipeline is building a change request, such as a GitHub pull request. You can also do more detailed checks against the change request, allowing you to ask "is this a change request against the master branch?" and much more. buildingTag - A simple condition that just checks if the Pipeline is running against a tag in SCM, rather than a branch or a specific commit reference. tag - A more detailed equivalent of buildingTag, allowing you to check against the tag name itself.
Jenkins
37,690,920
161
Our automated build is running on Jenkins. The build itself is running on slaves, with the slaves being executed via SSH. I get an error: 00:03:25.113 [codesign-app] build/App.app: User interaction is not allowed. I have tried every suggestion I have seen so far in other posts here: Using security unlock-keychain immediately before signing to unlock the keychain. Moving the signing key out into its own keychain. Moving the signing key into the login keychain. Moving the signing key into the system keychain. Manually setting list-keychains to only the keychain which contains the key. In all cases, I get the same error. In an attempt to diagnose the issue, I tried running the "security unlock-keychain" command on my local terminal and found that it doesn't actually unlock the keychain - if I look in Keychain Access, the lock symbol is still there. This is the case whether I pass the password on the command-line or whether I let it prompt me for it. Unlocking the same keychain using the GUI will prompt me for the password and then unlock it. Additionally, if I run "security lock-keychain", I do see the key lock immediately after running the command. This makes me think that unlock-keychain doesn't actually work. I experience the same behaviour on Lion (which we're using for the build slaves) and Mavericks (which I'm developing on.) Next, I tried adding -v to all the security commands: list-keychains "-d" "system" "-s" "/Users/tester/.secret/App.keychain" Listing keychains to see if it was added: (( "/Library/Keychains/System.keychain" )) unlock-keychain "-p" "**PASSWORD**" "/Users/tester/.secret/App.keychain" build/App.app: User interaction is not allowed. From this, it would seem that list-keychains is what isn't working. Maybe neither work. :/ There is a similar question here. The solution is interesting - set "SessionCreate" to true in launchctl. But I'm not building on the master - my build process is started from SSH on a slave build machine. Maybe there is a command-line way to do what launchctl is doing when you run "SessionCreate"?
I too have been fighting this. Nothing helped until I tried the suggestion on http://devnet.jetbrains.com/thread/311971. Thanks ashish agrawal! Log in as your build user via the GUI and open Keychain Access. Select your signing private key, right-click, choose Get Info, change to the Access Control tab and select "Allow all applications to access this item".
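On newer macOS versions (Sierra and later) an extra step after importing the key is often needed as well; a hedged sketch, assuming your CI exports the keychain path and password as the variables shown:
security unlock-keychain -p "$KEYCHAIN_PASSWORD" "$KEYCHAIN_PATH"
security set-key-partition-list -S apple-tool:,apple: -s -k "$KEYCHAIN_PASSWORD" "$KEYCHAIN_PATH"
The second command whitelists the Apple code-signing tools so codesign can use the key without a GUI prompt.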
Jenkins
20,205,162
161
How do you get Jenkins to execute python unittest cases? Is it possible to get JUnit style XML output from the built-in unittest package?
sample tests: tests.py: # tests.py import random try: import unittest2 as unittest except ImportError: import unittest class SimpleTest(unittest.TestCase): @unittest.skip("demonstrating skipping") def test_skipped(self): self.fail("shouldn't happen") def test_pass(self): self.assertEqual(10, 7 + 3) def test_fail(self): self.assertEqual(11, 7 + 3) JUnit with pytest run the tests with: py.test --junitxml results.xml tests.py results.xml: <?xml version="1.0" encoding="utf-8"?> <testsuite errors="0" failures="1" name="pytest" skips="1" tests="2" time="0.097"> <testcase classname="tests.SimpleTest" name="test_fail" time="0.000301837921143"> <failure message="test failure">self = &lt;tests.SimpleTest testMethod=test_fail&gt; def test_fail(self): &gt; self.assertEqual(11, 7 + 3) E AssertionError: 11 != 10 tests.py:16: AssertionError</failure> </testcase> <testcase classname="tests.SimpleTest" name="test_pass" time="0.000109910964966"/> <testcase classname="tests.SimpleTest" name="test_skipped" time="0.000164031982422"> <skipped message="demonstrating skipping" type="pytest.skip">/home/damien/test-env/lib/python2.6/site-packages/_pytest/unittest.py:119: Skipped: demonstrating skipping</skipped> </testcase> </testsuite> JUnit with nose run the tests with: nosetests --with-xunit nosetests.xml: <?xml version="1.0" encoding="UTF-8"?> <testsuite name="nosetests" tests="3" errors="0" failures="1" skip="1"> <testcase classname="tests.SimpleTest" name="test_fail" time="0.000"> <failure type="exceptions.AssertionError" message="11 != 10"> <![CDATA[Traceback (most recent call last): File "/opt/python-2.6.1/lib/python2.6/site-packages/unittest2-0.5.1-py2.6.egg/unittest2/case.py", line 340, in run testMethod() File "/home/damien/tests.py", line 16, in test_fail self.assertEqual(11, 7 + 3) File "/opt/python-2.6.1/lib/python2.6/site-packages/unittest2-0.5.1-py2.6.egg/unittest2/case.py", line 521, in assertEqual assertion_func(first, second, msg=msg) File "/opt/python-2.6.1/lib/python2.6/site-packages/unittest2-0.5.1-py2.6.egg/unittest2/case.py", line 514, in _baseAssertEqual raise self.failureException(msg) AssertionError: 11 != 10 ]]> </failure> </testcase> <testcase classname="tests.SimpleTest" name="test_pass" time="0.000"></testcase> <testcase classname="tests.SimpleTest" name="test_skipped" time="0.000"> <skipped type="nose.plugins.skip.SkipTest" message="demonstrating skipping"> <![CDATA[SkipTest: demonstrating skipping ]]> </skipped> </testcase> </testsuite> JUnit with nose2 You would need to use the nose2.plugins.junitxml plugin. You can configure nose2 with a config file like you would normally do, or with the --plugin command-line option. 
run the tests with: nose2 --plugin nose2.plugins.junitxml --junit-xml tests nose2-junit.xml: <testsuite errors="0" failures="1" name="nose2-junit" skips="1" tests="3" time="0.001"> <testcase classname="tests.SimpleTest" name="test_fail" time="0.000126"> <failure message="test failure">Traceback (most recent call last): File "/Users/damien/Work/test2/tests.py", line 18, in test_fail self.assertEqual(11, 7 + 3) AssertionError: 11 != 10 </failure> </testcase> <testcase classname="tests.SimpleTest" name="test_pass" time="0.000095" /> <testcase classname="tests.SimpleTest" name="test_skipped" time="0.000058"> <skipped /> </testcase> </testsuite> JUnit with unittest-xml-reporting Append the following to tests.py if __name__ == '__main__': import xmlrunner unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test-reports')) run the tests with: python tests.py test-reports/TEST-SimpleTest-20131001140629.xml: <?xml version="1.0" ?> <testsuite errors="1" failures="0" name="SimpleTest-20131001140629" tests="3" time="0.000"> <testcase classname="SimpleTest" name="test_pass" time="0.000"/> <testcase classname="SimpleTest" name="test_fail" time="0.000"> <error message="11 != 10" type="AssertionError"> <![CDATA[Traceback (most recent call last): File "tests.py", line 16, in test_fail self.assertEqual(11, 7 + 3) AssertionError: 11 != 10 ]]> </error> </testcase> <testcase classname="SimpleTest" name="test_skipped" time="0.000"> <skipped message="demonstrating skipping" type="skip"/> </testcase> <system-out> <![CDATA[]]> </system-out> <system-err> <![CDATA[]]> </system-err> </testsuite>
Jenkins
11,241,781
160
I was following this tutorial: node { git url: 'https://github.com/joe_user/simple-maven-project-with-tests.git' ... } However it doesn't tell how to add credentials. Jenkins does have specific "Credentials" section where you define user user&pass, and then get ID for that to use in jobs, but how do I use that in Pipeline instructions? I tried with: git([url: '[email protected]:company/repo.git', branch: 'master', credentialsId: '12345-1234-4696-af25-123455']) no luck: stderr: Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. Is there a way configure the creds in pipeline, or do I have to put SSH-keys to Jenkin's Linux user's .ssh/authorized_keys file? In ideal world I'd like to have a repository for pipeline jobs and repo-keys, then launch Docker Jenkins, and dynamically add these jobs and keys there without having to configure anything in Jenkins Console.
You can use the following in a pipeline: git branch: 'master', credentialsId: '12345-1234-4696-af25-123455', url: 'ssh://[email protected]:company/repo.git' If you're using the ssh url then your credentials must be username + private key. If you're using the https clone url instead of the ssh one, then your credentials should be username + password.
Jenkins
38,461,705
156
Using Jenkins 1.501 and Jenkins Git plugin 1.1.26 I have 3 different git repos each with multiple projects. Now I need to checkout all projects from the 3 git repos into the same workspace on a Jenkins slave. I have defined each git repo in: Source code Management: Multiple SCMs. But each time a repo is checked out the previous repo (and its associated projects) is deleted. I have read this: http://jenkins.361315.n4.nabble.com/multiple-git-repos-in-one-job-td4633300.html but its does not really help. I have tried to specify the same folder under Local subdirectory for repo (optional) for all repos but it gives the same result. If this is simply impossible using Jenkins I guess some pre-build step/scripting could be used to move the projects into the right location. Its not an option to modify the build configuration of the projects.
With the Multiple SCMs Plugin: create a different repository entry for each repository you need to checkout (main project or dependency project). for each project, in the "advanced" menu (the second "advanced" menu, there are two buttons labeled "advanced" for each repository), find the "Local subdirectory for repo (optional)" textfield. You can specify there the subdirectory in the "workspace" directory where you want to copy the project to. This way you can map the filesystem layout of your development computer. The "second advanced menu" doesn't exist anymore; instead what needs to be done is use the "Add" button (on the "Additional Behaviours" section), and choose "Check out to a sub-directory" if you are using ant: as now the build.xml file with the build targets is not in the root directory of the workspace but in a subdirectory, you have to reflect that in the "Invoke Ant" configuration. To do that, in "Invoke ant", press "Advanced" and fill the "Build file" input text, including the name of the subdirectory where the build.xml is located.
Jenkins
14,843,696
153
I have two Jenkins instances running. An old (legacy) one at version 1.614 and a new one with 1.633. In the old one it is possible to use HTML in the job description (it even does syntax highlighting editing it). The new one doesn't. HTML content is escaped and shown as plain text. I could not find a change in the release notes explaining this behavior. Is there a configuration that I'm missing?
In the Global Security configuration (Manage Jenkins > Configure Global Security), change the Markup Formatter from "Plain text" to "Safe HTML" so that HTML in descriptions is rendered instead of escaped.
Jenkins
33,821,217
149
I am new to Hudson / Jenkins and was wondering if there is a way to check in Hudson's configuration files to source control. Ideally I want to be able to click some button in the UI that says 'save configuration' and have the Hudson configuration files checked in to source control.
Most helpful Answer There is a plugin called SCM Sync configuration plugin. Original Answer Have a look at my answer to a similar question. The basic idea is to use the filesystem-scm-plugin to detect changes to the xml-files. Your second part would be committing the changes to SVN. EDIT: If you find a way to determine the user for a change, let us know. EDIT 2011-01-10 Meanwhile there is a new plugin: SCM Sync configuration plugin. Currently it only works with subversion and git, but support for more repositories is planned. I have been using it since version 0.0.3 and it has worked well so far.
Jenkins
2,087,142
148
I have jenkins.war and I started it from command prompt in Windows as: java -jar jenkins.war It was started well and easily browsed as http://localhost:8080 I want to start on 9090 port. How can I do that?
Use the following command at command prompt: java -jar jenkins.war --httpPort=9090 If you want to use https use the following command: java -jar jenkins.war --httpsPort=9090 Details are here
Jenkins
15,265,277
146
For running an ASP.NET Core application, I generated a dockerfile which builds the application and copies the source code into the container, which is fetched by Git using Jenkins. So in my workspace, I do the following in the dockerfile: WORKDIR /app COPY src src While Jenkins updates the files on my host correctly with Git, Docker doesn't apply this to my image. My basic script for building: #!/bin/bash imageName=xx:my-image containerName=my-container docker build -t $imageName -f Dockerfile . containerRunning=$(docker inspect --format="{{ .State.Running }}" $containerName 2> /dev/null) if [ "$containerRunning" == "true" ]; then docker stop $containerName docker start $containerName else docker run -d -p 5000:5000 --name $containerName $imageName fi I tried different things like the --rm and --no-cache parameters for docker run and also stopping/removing the container before the new one is built. I'm not sure what I'm doing wrong here. It seems that docker is updating the image correctly, as the call of COPY src src would result in a layer id and no cache call: Step 6 : COPY src src ---> 382ef210d8fd What is the recommended way to update a container? My typical scenario would be: The application is running on the server in a Docker container. Now parts of the app are updated, e.g. by modifying a file. Now the container should run the new version. Docker seems to recommend building a new image instead of modifying an existing container, so I think the general way of rebuilding like I do is right, but some detail in the implementation has to be improved.
Video with visual explanation (from 2022) Since I got a lot of positive feedback on my previous, first visual explanation, I decided to create another video for this question and answer since there are some things which can be visualized better in a graphical video. It visualizes and also updates this answer with the knowledge and experience which I got in the last years using Docker on multiple systems (and also K8s). While this question was asked in the context of ASP.NET Core, it is not really related to this framework. The problem was a lack of basic understanding of Docker concepts, so it can happen with nearly every application and framework. For that reason, I used a simple Nginx webserver here since I think many of you are familiar with web servers, but not everyone knows how specific frameworks like ASP.NET Core work. The underlying problem is to understand the difference between containers vs images and how they are different in their lifecycle, which is the basic topic of this video. Textual answer (Originally from 2016) After some research and testing, I found that I had some misunderstandings about the lifetime of Docker containers. Simply restarting a container doesn't make Docker use a new image, when the image was rebuilt in the meantime. Instead, Docker is fetching the image only before creating the container. So the state after running a container is persistent. Why removing is required Therefore, rebuilding and restarting isn't enough. I thought containers work like a service: Stop the service, do your changes, restart it and they would apply. That was my biggest mistake. Because containers are permanent, you have to remove them using docker rm <ContainerName> first. After a container is removed, you can't simply start it by docker start. This has to be done using docker run, which itself uses the latest image for creating a new container instance. Containers should be as independent as possible With this knowledge, it's comprehensible why storing data in containers is considered a bad practice and Docker recommends data volumes/mounting host directories instead: Since a container has to be destroyed to update applications, the stored data inside would be lost too. This causes extra work to shut down services, back up data and so on. So it's a smart solution to exclude those data completely from the container: We don't have to worry about our data, when it's stored safely on the host and the container only holds the application itself. Why --rm may not really help you The docker run command has a clean-up switch called --rm. It will stop the behavior of keeping Docker containers permanently. Using --rm, Docker will destroy the container after it has exited. But this switch has a problem: Docker also removes the volumes without a name associated with the container, which may kill your data. While the --rm switch is a good option to save work during development for quick tests, it's less suitable in production. Especially because of the missing option to run a container in the background, which would mostly be required. How to remove a container We can bypass those limitations by simply removing the container: docker rm --force <ContainerName> The --force (or -f) switch uses SIGKILL on running containers. Instead, you could also stop the container before: docker stop <ContainerName> docker rm <ContainerName> Both are equal. docker stop also uses SIGTERM.
But using the --force switch will shorten your script, especially when using CI servers: docker stop throws an error if the container is not running. This would cause Jenkins and many other CI servers to consider the build wrongly as failed. To fix this, you have to check first if the container is running as I did in the question (see containerRunning variable). There is a better way (Added 2016) While plain docker commands like docker build, docker run and others are a good way for beginners to understand basic concepts, it's getting annoying when you're already familiar with Docker and want to get productive. A better way is to use Docker-Compose. While it's designed for multi-container environments, it also gives you benefits when using standalone with a single container. Although multi-container environments aren't really uncommon. Nearly every application has at least an application server and some database. Some even more like caching servers, cron containers or other things. version: "2.4" services: my-container: build: . ports: - "5000:5000" Now you can just use docker-compose up --build and compose will take care of all the steps which I did manually. I'd prefer this one over the script with plain Docker commands, which I added as an answer from 2016. It still works, but is more complex and it won't handle certain situations as well as docker-compose would. For example, compose checks if everything is up to date and only rebuilds those things which need to be rebuilt because of changes. Especially when you're using multiple containers, compose offers way more benefits. For example, linking the containers which requires creating/maintaining networks manually otherwise. You can also specify dependencies so that a database container is started before the application server, which depends on the DB at startup. In the past with Docker-Compose 1.x I noticed some issues, especially with caching. This results in containers not being updated, even when something had changed. I have tested compose v2 for some time now without seeing any of those issues again, so it seems to be fixed now. Full script for rebuilding a Docker container (original answer from 2016) According to this new knowledge, I fixed my script in the following way: #!/bin/bash imageName=xx:my-image containerName=my-container docker build -t $imageName -f Dockerfile . echo Delete old container... docker rm -f $containerName echo Run new container... docker run -d -p 5000:5000 --name $containerName $imageName This works perfectly :)
Jenkins
41,322,541
145
I'm trying to create a declarative Jenkins pipeline script but having issues with simple variable declaration. Here is my script:

pipeline {
    agent none
    stages {
        stage("first") {
            def foo = "foo" // fails with "WorkflowScript: 5: Expected a step @ line 5, column 13."
            sh "echo ${foo}"
        }
    }
}

However, I get this error:

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 5: Expected a step @ line 5, column 13.
    def foo = "foo"
    ^

I'm on Jenkins 2.7.4 and Pipeline 2.4.
The Declarative model for Jenkins Pipelines has a restricted subset of syntax that it allows in the stage blocks - see the syntax guide for more info. You can bypass that restriction by wrapping your steps in a script { ... } block, but as a result, you'll lose validation of syntax, parameters, etc within the script block.
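For example, a minimal sketch of the pipeline from the question with the declaration wrapped in a script block (note that agent none is replaced with agent any here, since the sh step needs a node to run on):

pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"
                    sh "echo ${foo}"
                }
            }
        }
    }
}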
Jenkins
39,832,862
144
In 2011 the situation with Hudson and Jenkins was as follows (IMHO): Hudson was a little more stable, but the development of Jenkins was a little faster. What is the situation with "Hudson vs Jenkins" now, in 2012?
I have used both Hudson and Jenkins. I have been following both change lists. I still think we made the right choice by moving from Hudson to Jenkins.

The Hudson core developers are now working on Jenkins. Those who are still employed by Oracle are the ones mainly supporting Hudson (as far as I am aware the Apache Maven people are contributing fixes as well).

I've filed a number of bugs back in the Hudson era. I can tell you most of them were resolved in Jenkins. Many months after their resolution, the Hudson people fixed or asked for further input on those particular bugs.

The majority of the plugin developers (almost all, that is) have migrated their plugins to Jenkins and now support Jenkins mainly. In terms of plugins Jenkins is developing much, much faster. There are now some paid plugins provided by Cloudbees.

As far as I am aware, the open source community has in its majority moved to Jenkins. Some companies who prefer to have paid support and don't want the hassle of migrating to Jenkins are still using Hudson. Frankly, I don't see why. Jenkins has commercial support too from Cloudbees, which is where Kohsuke Kawaguchi (the creator of Hudson) now works. Cloudbees now even have a free service for hosting GitHub hosted projects in their cloud. They let your OSS projects build for free! :)

Jenkins has improved its support for the cloud. As mentioned above, Cloudbees also provide this SaaS in the cloud. I am not sure if and to what extent Hudson supports this. I think they're not so advanced at the moment; whatever the case, Hudson doesn't provide a SaaS for the cloud, as far as I am aware.

My opinion is that if you have to pick one, it should be Jenkins.
Jenkins
11,433,083
144
I am having issues getting Jenkins to build a specified tag. The tag is part of a parametrized build, but I do not know how to pass this through to the git plugin to just build that tag. This has been taking 3 hours of my day and I have conceded defeat to the masters at Stack Overflow.
I was able to do that by using the "branches to build" parameter: Branch Specifier (blank for default): tags/[tag-name] Replace [tag-name] by the name of your tag.
Jenkins
10,195,900
142
I'd like for Jenkins to automagically fetch data from my private repository hosted on GitHub, but I have no idea how to accomplish that task. I tried the documentation and generating an ssh key for the jenkins user, and all I can see is: "unable to clone the repo". I've checked the URLs - they are valid. Any clues? Maybe you know some docs/blogs/whatever which describe this kind of stuff?
Perhaps GitHub's support for deploy keys is what you're looking for? To quote that page:

When should I use a deploy key?

Simple, when you have a server that needs pull access to a single private repo. This key is attached directly to the repository instead of to a personal user account.

If that's what you're already trying and it doesn't work, you might want to update your question with more details of the URLs being used, the names and location of the key files, etc.

Now for the technical part: How to use your SSH key with Jenkins?

If you have, say, a jenkins unix user, you can store your deploy key in ~/.ssh/id_rsa. When Jenkins tries to clone the repo via ssh, it will try to use that key.

In some setups, you cannot run Jenkins under its own user account, and possibly also cannot use the default ssh key location ~/.ssh/id_rsa. In such cases, you can create a key in a different location, e.g. ~/.ssh/deploy_key, and configure ssh to use that with an entry in ~/.ssh/config:

Host github-deploy-myproject
    HostName       github.com
    User           git
    IdentityFile   ~/.ssh/deploy_key
    IdentitiesOnly yes

Because you authenticate to all GitHub repositories using git@github.com and you don't want the above key to be used for all your connections to GitHub, we created a host alias github-deploy-myproject. Your clone URL now becomes

git clone github-deploy-myproject:myuser/myproject

and that is also what you put as the repository URL into Jenkins.

(Note that you must not put ssh:// in front in order for this to work.)
Jenkins
5,212,304
141
What is the difference between Jenkins and other CI systems like GitLab CI or drone.io that come bundled with the Git hosting itself? From some research I could only find that GitLab Community Edition doesn't allow Jenkins to be added, but GitLab Enterprise Edition does. Are there any other significant differences?
This is my experience: At my work we manage our repositories with GitLab EE and we have a Jenkins server (1.6) running. In essence they do pretty much the same thing: they will run some scripts on a server/Docker image.

TL;DR;

Jenkins is easier to use/learn, but it has the risk of becoming a plugin hell
Jenkins has a GUI (this can be preferred if it has to be accessible/maintainable by other people)
Integration with GitLab is less than with GitLab CI
Jenkins can be split off from your repository

Most CI servers are pretty straightforward (concourse.ci, gitlab-ci, circle-ci, travis-ci, drone.io, gocd and what have you). They allow you to execute shell/bat scripts from a YAML file definition. Jenkins is much more pluggable, and comes with a UI. This can be either an advantage or disadvantage, depending on your needs.

Jenkins is very configurable because of all the plugins that are available. The downside of this is that your CI server can become a spaghetti of plugins.

In my opinion, chaining and orchestrating jobs in Jenkins is much simpler (because of the UI) than via YAML (calling curl commands). Besides that, Jenkins supports plugins that will install certain binaries when they are not available on your server (I don't know about that for the others).

Nowadays Jenkins 2 also supports more "proper CI" with the Jenkinsfile and the Pipeline plugin, which comes by default as of Jenkins 2, but it used to be less coupled to the repository than e.g. GitLab CI.

Using YAML files to define your build pipeline (and in the end running pure shell/bat) is cleaner.

The plug-ins available for Jenkins allow you to visualize all kinds of reporting, such as test results, coverage and other static analyzers. Of course, you can always write or use a tool to do this for you, but it is definitely a plus for Jenkins (especially for managers who tend to value these reports too much).

Lately I have been working more and more with GitLab CI. At GitLab they are doing a really great job making the whole experience fun. I understand that people use Jenkins, but when you have GitLab running and available it is really easy to get started with GitLab CI. There won't be anything that integrates as seamlessly as GitLab CI, even though they put quite some effort into third-party integrations.

Their documentation should get you started in no time.
The threshold to get started is very low.
Maintenance is easy (no plugins).
Scaling runners is simple.
CI is fully part of your repository.
Jenkins jobs/views can get messy.

Some quirks at the time of writing: only support for a single CI config file in the Community Edition; multiple files in the Enterprise Edition.
Jenkins
37,429,453
140
Could someone please explain to me the idea of artifacts in the build process? I have the workspace directory where I check out the code to compile and run my ant scripts etc. At the end, in my case, I get a jar file that's ready to install. Is that considered to be the artifact? Where should I tell my build script to put the jar file? In the workspace directory? My jar file gets a unique filename depending on variables like BUILD_ID and such, how can I tell Jenkins which jar file to pick?

EDIT: Okay, so I tried doing something like this: The path does not exist yet in my workspace, because the build script is supposed to create it, and of course, the .jar and .properties files are not there because they haven't been generated yet. Why does it give me an error then? Seems like I'm missing something.

Also, does Jenkins delete the artifacts after each build (not the archived artifacts, I know I can tell it to delete those)? Otherwise it will clog the hard drive pretty quickly.
Your understanding is correct, an artifact in the Jenkins sense is the result of a build - the intended output of the build process. A common convention is to put the result of a build into a build, target or bin directory. The Jenkins archiver can use globs (target/*.jar) to easily pick up the right file even if you have a unique name per build.
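If you are using a Pipeline job instead of a freestyle project, the rough equivalent of the archiver is the archiveArtifacts step; a one-line sketch (the glob is illustrative):

archiveArtifacts artifacts: 'target/*.jar', fingerprint: true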
Jenkins
5,821,545
137
I want to increase the available heap space for Jenkins. But as it is installed as a service I don't know how to do it.
If you used Aptitude (apt-get) to install Jenkins on Ubuntu 12.04, uncomment the JAVA_ARGS line in the top few lines of /etc/default/jenkins: # arguments to pass to java #JAVA_ARGS="-Xmx256m" # <--default value JAVA_ARGS="-Xmx2048m" #JAVA_ARGS="-Djava.net.preferIPv4Stack=true" # make jenkins listen on IPv4 address
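Note that Jenkins has to be restarted before the new heap setting takes effect, e.g. with sudo service jenkins restart.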
Jenkins
5,936,519
136
How do I trigger a build remotely from Jenkins? How do I configure a Git post-commit hook? My requirement is that whenever changes are made in the Git repository for a particular project, it should automatically start a Jenkins build for that project. In the Jenkins trigger build section I selected "Trigger builds remotely". In the .git directory there is a hooks directory where we have to configure the post-commit file. I am confused about how to trigger a build from there (I know we should use a curl command for part of it).

curl cmbuild.aln.com/jenkins/view/project name/job/myproject/buildwithparameters?Branch=feat-con

I have placed this command in my Git server's hooks directory (post-commit hook). Whenever changes happen in the repository, it runs an automated build. I want to check in the changeset whether at least one Java file is there; only then should the build start. Suppose the developers changed only XML files or property files: then the build should not start. But if a .java file is among the changes, the build should start.
As mentioned in "Polling must die: triggering Jenkins builds from a git hook", you can notify Jenkins of a new commit: With the latest Git plugin 1.1.14 (that I just release now), you can now do this more >easily by simply executing the following command: curl http://yourserver/jenkins/git/notifyCommit?url=<URL of the Git repository> This will scan all the jobs that’s configured to check out the specified URL, and if they are also configured with polling, it’ll immediately trigger the polling (and if that finds a change worth a build, a build will be triggered in turn.) This allows a script to remain the same when jobs come and go in Jenkins. Or if you have multiple repositories under a single repository host application (such as Gitosis), you can share a single post-receive hook script with all the repositories. Finally, this URL doesn’t require authentication even for secured Jenkins, because the server doesn’t directly use anything that the client is sending. It runs polling to verify that there is a change, before it actually starts a build. As mentioned here, make sure to use the right address for your Jenkins server: since we're running Jenkins as standalone Webserver on port 8080 the URL should have been without the /jenkins, like this: http://jenkins:8080/git/notifyCommit?url=git@gitserver:tools/common.git To reinforce that last point, ptha adds in the comments: It may be obvious, but I had issues with: curl http://yourserver/jenkins/git/notifyCommit?url=<URL of the Git repository>. The url parameter should match exactly what you have in Repository URL of your Jenkins job. When copying examples I left out the protocol, in our case ssh://, and it didn't work. You can also use a simple post-receive hook like in "Push based builds using Jenkins and GIT" #!/bin/bash /usr/bin/curl --user USERNAME:PASS -s \ http://jenkinsci/job/PROJECTNAME/build?token=1qaz2wsx Configure your Jenkins job to be able to “Trigger builds remotely” and use an authentication token (1qaz2wsx in this example). However, this is a project-specific script, and the author mentions a way to generalize it. The first solution is easier as it doesn't depend on authentication or a specific project. I want to check in change set whether at least one java file is there the build should start. Suppose the developers changed only XML files or property files, then the build should not start. Basically, your build script can: put a 'build' notes (see git notes) on the first call on the subsequent calls, grab the list of commits between HEAD of your branch candidate for build and the commit referenced by the git notes 'build' (git show refs/notes/build): git diff --name-only SHA_build HEAD. your script can parse that list and decide if it needs to go on with the build. in any case, create/move your git notes 'build' to HEAD. May 2016: cwhsu points out in the comments the following possible url: you could just use curl --user USER:PWD http://JENKINS_SERVER/job/JOB_NAME/build?token=YOUR_TOKEN if you set trigger config in your item June 2016, polaretto points out in the comments: I wanted to add that with just a little of shell scripting you can avoid manual url configuration, especially if you have many repositories under a common directory. For example I used these parameter expansions to get the repo name repository=${PWD%/hooks}; repository=${repository##*/} and then use it like: curl $JENKINS_URL/git/notifyCommit?url=$GIT_URL/$repository
Jenkins
12,794,568
136
How do I pass variables between stages in a declarative pipeline? In a scripted pipeline, I gather the procedure is to write to a temporary file, then read the file into a variable. How do I do this in a declarative pipeline? E.g. I want to trigger a build of a different job, based on a variable created by a shell action.

stage("stage 1") {
    steps {
        sh "do_something > var.txt"
        // I want to get var.txt into VAR
    }
}
stage("stage 2") {
    steps {
        build job: "job2", parameters: [string(name: "var", value: "${VAR}")]
    }
}
If you want to use a file (since a script is the thing generating the value you need), you could use readFile as seen below. If not, use sh with the returnStdout option, shown as a commented alternative (OPTION 2) below:

// Define a groovy local variable, myVar.
// A global variable without the def, like myVar = 'initial_value',
// was required for me in older versions of jenkins. Your mileage
// may vary. Defining the variable here maybe adds a bit of clarity,
// showing that it is intended to be used across multiple stages.
def myVar = 'initial_value'

pipeline {
  agent { label 'docker' }
  stages {
    stage('one') {
      steps {
        echo "1.1. ${myVar}" // prints '1.1. initial_value'
        sh 'echo hotness > myfile.txt'
        script {
          // OPTION 1: set variable by reading from file.
          // FYI, trim removes leading and trailing whitespace from the string
          myVar = readFile('myfile.txt').trim()
          // OPTION 2: set the variable directly from the command output,
          // without the intermediate file:
          // myVar = sh(script: 'echo hotness', returnStdout: true).trim()
        }
        echo "1.2. ${myVar}" // prints '1.2. hotness'
      }
    }
    stage('two') {
      steps {
        echo "2.1 ${myVar}" // prints '2.1. hotness'
        sh "echo 2.2. sh ${myVar}, Sergio" // prints '2.2. sh hotness, Sergio'
      }
    }
    // this stage is skipped due to the when expression, so nothing is printed
    stage('three') {
      when {
        expression { myVar != 'hotness' }
      }
      steps {
        echo "three: ${myVar}"
      }
    }
  }
}
Jenkins
44,099,851
135
I have a strange problem with the Jenkins HTML Publisher plugin, wherein all the fancy CSS I have added to the report is stripped out when viewed in Jenkins. If I download the report and view it locally, I can see the CSS formatting. Is there a setting in Jenkins which allows the CSS to be displayed?

My HTML Publisher settings in Jenkins:

My report page when displayed in Jenkins:

My report page when displayed locally:
Figured out the issue. Sharing it here for other users.

CSS is stripped out because of the Content Security Policy in Jenkins. (https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy)

The default rule is set to:

sandbox; default-src 'none'; img-src 'self'; style-src 'self';

This rule set results in the following:

No JavaScript allowed at all
No plugins (object/embed) allowed
No inline CSS, or CSS from other sites allowed
No images from other sites allowed
No frames allowed
No web fonts allowed
No XHR/AJAX allowed, etc.

To relax this rule, go to Manage Jenkins -> Manage Nodes -> click settings (gear icon) -> click Script Console on the left and type in the following command:

System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "")

and press Run. If you see the output as 'Result:' below the "Result" header, then the protection is disabled. Re-run your build and you will see that the newly archived HTML files have the CSS enabled.
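Keep in mind that a property set via the Script Console does not survive a Jenkins restart; to make the relaxed policy permanent, the hudson.model.DirectoryBrowserSupport.CSP system property has to be passed to the Jenkins JVM at startup. Also be aware that setting it to an empty value disables the protection entirely, so only do this if you trust the published report content.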
Jenkins
35,783,964
135
This isn't as simple as just doing a parametrized build. I've already got a specific build process that will build and deploy whenever any of these branches are pushed to GitHub: So if I've just pushed develop and it built successfully, how do I trigger a manual build and have it pull feature/my-new-feature (without doing a git push)? I tried enabling parametrized build, adding a new string called branch, and then adding a new branch specifier, */$branch. I then ran a build and set branch to feature/my-new-feature and it still pulled from develop.
The best solution can be:

Add a string parameter to the existing job (for example a parameter named BRANCH, with master as the default value).

Then in the Source Code Management section, update Branches to build to use the string parameter you defined (for example a Branch Specifier of */$BRANCH).

If you see a checkbox labeled Lightweight checkout, make sure it is unchecked.

This configuration tells the Jenkins job to use master as the default branch, and for manual builds it will ask you to enter branch details (FYI: by default it's set to master).
Jenkins
32,108,380
135
Installing a plugin from the Update center results in: Checking internet connectivity Failed to connect to http://www.google.com/. Perhaps you need to configure HTTP proxy? Deploy Plugin Failure - Details hudson.util.IOException2: Failed to download from http://updates.jenkins-ci.org/download/plugins/deploy/1.9/deploy.hpi Is it possible to download the plugin and install it manually into Jenkins?
Yes, you can. Download the plugin (*.hpi file) and put it in the following directory: <jenkinsHome>/plugins/ Afterwards you will need to restart Jenkins.
Jenkins
14,950,408
135
When you are using a freestyle project, you can set the build to be aborted if it has not concluded after 20 minutes. How is this possible with a Jenkins Multibranch Pipeline project?
You can use the timeout step: timeout(20) { node { sh 'foo' } } If you need a different TimeUnit than MINUTES, you can supply the unit argument: timeout(time: 20, unit: 'SECONDS') { EDIT Aug 2018: Nowadays with the more common declarative pipelines (easily recognized by the top-level pipeline construct), timeouts can also be specified using options on different levels (per overall pipeline or per stage): pipeline { options { timeout(time: 1, unit: 'HOURS') } stages { .. } // .. } Still, if you want to apply a timeout to a single step in a declarative pipeline, it can be used as described above.
Jenkins
38,096,004
133
I'm adding continuous integration to an EC2 project at work using Jenkins. The Jenkins machine itself is kept on an EC2 machine - one that might need to be taken offline and brought back on an entirely different EC2 instance at any point. We have a bunch of Puppet manifests allowing us to easily reinstall the software on the EC2 instance, but custom configuration files, like the ones for the jobs I create in Jenkins, would be deleted after the move. Now, if Jenkins stores what jobs are to be run on it in an XML file or set of XML files somewhere, I could set up a system where those files are committed to the version control server, and then downloaded back to a newly-created server as part of the puppet manifest. Does anyone know where these files are stored? I've tried copying /var/lib/jenkins/jobs, but that appears to store the output of Jenkins' jobs, not the input.
Jenkins stores some of the related build data like the following:

The working directory is stored in the directory {JENKINS_HOME}/workspace/.
Each job stores its related temporal workspace folder in the directory {JENKINS_HOME}/workspace/{JOBNAME}
The configuration for all jobs is stored in the directory {JENKINS_HOME}/jobs/.
Each job stores its related build data in the directory {JENKINS_HOME}/jobs/{JOBNAME}

Each job folder contains:

The job configuration file is {JENKINS_HOME}/jobs/{JOBNAME}/config.xml
The job builds are stored in {JENKINS_HOME}/jobs/{JOBNAME}/builds/

See the Jenkins documentation (https://www.jenkins.io/doc/book/system-administration/) for a visual representation and further details.

JENKINS_HOME
 +- config.xml     (jenkins root configuration)
 +- *.xml          (other site-wide configuration files)
 +- userContent    (files in this directory will be served under your http://server/userContent/)
 +- fingerprints   (stores fingerprint records)
 +- nodes          (slave configurations)
 +- plugins        (stores plugins)
 +- secrets        (secrets needed when migrating credentials to other servers)
 +- workspace (working directory for the version control system)
     +- [JOBNAME] (sub directory for each job)
 +- jobs
     +- [JOBNAME] (sub directory for each job)
         +- config.xml (job configuration file)
         +- latest (symbolic link to the last successful build)
         +- builds
             +- [BUILD_ID] (for each build)
                 +- build.xml (build result summary)
                 +- log (log file)
                 +- changelog.xml (change log)
Jenkins
6,131,114
130
I'm using SVN, Maven 3.0.3 on the latest version of Jenkins and the Maven Release plugin. I'm trying to use the Maven release plugin (through Jenkins) do a dry run and so am executing the options … Executing Maven: -B -f /scratch/jenkins/workspace/myproject/myproject/pom.xml -DdevelopmentVersion=53.0.0-SNAPSHOT -DreleaseVersion=52.0.0 -Dusername=***** -Dpassword=********* -DskipTests -P prod -Dresume=false -DdryRun=true release:prepare But the dry run is dying with the error below … [JENKINS] Archiving /scratch/jenkins/workspace/myproject/myproject/pom.xml to /home/evotext/hudson_home/jobs/myproject/modules/org.mainco.subco$myproject/builds/2013-11-18_16-09-14/archive/org.mainco.subco/myproject/52.0.0/myproject-52.0.0.pom Waiting for Jenkins to finish collecting data mavenExecutionResult exceptions not empty message : Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.0:prepare (default-cli) on project myproject: You don't have a SNAPSHOT project in the reactor projects list. cause : You don't have a SNAPSHOT project in the reactor projects list. Stack trace : org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.0:prepare (default-cli) on project myproject: You don't have a SNAPSHOT project in the reactor projects list. at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156) at org.jvnet.hudson.maven3.launcher.Maven3Launcher.main(Maven3Launcher.java:117) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239) at org.jvnet.hudson.maven3.agent.Maven3Main.launch(Maven3Main.java:178) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at hudson.maven.Maven3Builder.call(Maven3Builder.java:129) at hudson.maven.Maven3Builder.call(Maven3Builder.java:67) at hudson.remoting.UserRequest.perform(UserRequest.java:118) at hudson.remoting.UserRequest.perform(UserRequest.java:48) at hudson.remoting.Request$2.run(Request.java:326) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: org.apache.maven.plugin.MojoFailureException: You don't have a SNAPSHOT project in the reactor projects list. at org.apache.maven.plugins.release.PrepareReleaseMojo.prepareRelease(PrepareReleaseMojo.java:219) at org.apache.maven.plugins.release.PrepareReleaseMojo.execute(PrepareReleaseMojo.java:181) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 30 more Caused by: org.apache.maven.shared.release.ReleaseFailureException: You don't have a SNAPSHOT project in the reactor projects list. at org.apache.maven.shared.release.phase.CheckPomPhase.execute(CheckPomPhase.java:111) at org.apache.maven.shared.release.phase.CheckPomPhase.simulate(CheckPomPhase.java:123) at org.apache.maven.shared.release.DefaultReleaseManager.prepare(DefaultReleaseManager.java:199) at org.apache.maven.shared.release.DefaultReleaseManager.prepare(DefaultReleaseManager.java:140) at org.apache.maven.shared.release.DefaultReleaseManager.prepare(DefaultReleaseManager.java:103) at org.apache.maven.plugins.release.PrepareReleaseMojo.prepareRelease(PrepareReleaseMojo.java:211) ... 33 more My SVN checkout method is set to "Always checkout a fresh copy" and I have a snapshot version in question in my snapshot repository, but not in my release repository. Is there a way I can get the "reactor projects list" to look at my snapshot repo? Edit: I'm including the snippet of my pom where the project gets its version -- it inherits it from a parent <parent> <artifactId>subco</artifactId> <groupId>org.mainco.subco</groupId> <version>52.0.0</version> </parent>
You're trying to release an artifact that's not a snapshot. That means your artifact's version number is something like 3.0.3. That version number implies its already been released. You can't release a release. There would be no changes in between and therefore no point. You're only supposed to release SNAPSHOT versions. That means your version number would be like 3.0.3-SNAPSHOT.
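Applied to the pom snippet in the question: the inherited version would need to be 52.0.0-SNAPSHOT (not 52.0.0) for the release:prepare call with -DreleaseVersion=52.0.0 -DdevelopmentVersion=53.0.0-SNAPSHOT to make sense.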
Jenkins
20,054,185
128
I have a little problem.

The problem: I am trying to run a Gradle build of my Android project on Jenkins, and now I am stuck on a problem I can't resolve. During the build I get this error message:

:Client:mergeDebugResources /var/lib/jenkins/workspace/LMA-Client/Client/build/exploded-aar/com.google.android.gms/play-services/3.1.59/res/drawable-hdpi/common_signin_btn_text_focus_light.9.png: Error: Cannot run program "/opt/android-sdk/build-tools/19.0.1/aapt": java.io.IOException: error=2, No such file or directory
:Client:mergeDebugResources FAILED

You can imagine that this aapt... yep, it's there, and the png... it's there too, so the mistake must be somewhere else.

The solution? I googled around for 1-2 hours, surfed this great website, and what I found is that if Jenkins runs on a 64-bit system, I need to install the ia32-libs, like this:

sudo apt-get install ia32-libs

I tried that, and I couldn't install it:

The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch

so I tried to install "ia32-libs-multiarch", but again:

The following packages have unmet dependencies: ia32-libs-multiarch:i386 : Depends: libgphoto2-2:i386 but it is not going to be installed Depends: libsane:i386 but it is not going to be installed E: Unable to correct problems, you have held broken packages.

So finally I am standing here asking myself: is that really the solution? And why should I install this thing? And how? So please help me; I think I am not far from the answer.
I had the following similar error on Ubuntu 13.10: Cannot run program "/usr/local/android-sdk-linux/build-tools/19.0.3/aapt": error=2, No such file or directory And this answer fixed it for me: To get aapt working (this fixed my issues with the avd as well) just install these two packages: sudo apt-get install lib32stdc++6 lib32z1
Jenkins
22,701,405
126
I am trying to run a block if a directory exists in my Jenkins workspace, but the pipeline step "fileExists: Verify file exists" doesn't seem to work correctly. I'm using Jenkins v1.642 and Pipeline v2.1, and am trying to have a condition like:

if ( fileExists 'test1' ) {
    // Some block
}

What are the other alternatives I have within the pipeline?
You need to use brackets when using the fileExists step in an if condition or assign the returned value to a variable Using variable: def exists = fileExists 'file' if (exists) { echo 'Yes' } else { echo 'No' } Using brackets: if (fileExists('file')) { echo 'Yes' } else { echo 'No' }
Jenkins
38,534,781
125
The horror stories I found while searching for an answer for this one...

OK, I have a .sh script which pretty much does everything Jenkins is supposed to do:

checks out sources from SVN
builds the project
deploys the project
cleans after itself

So in Jenkins I only have to 'build' the project by running the script in an Execute Shell command. The script runs (the sources are downloaded, the project is built/deployed), but then it marks the build as a failure:

Build step 'Execute shell' marked build as failure

Even if the script ran successfully! I tried closing the script with:

exit 0 (still marks it as failure)
exit 1 (marks it as failure, as expected)
no exit command at all (marks it as failure)

When, how and why does Execute Shell mark my build as a failure?
First things first, a side note. It is not part of the answer, but absolutely has to be said:

If you have a shell script that does "checkout, build, deploy" all by itself, then why are you using Jenkins? You are foregoing all the features of Jenkins that make it what it is. You might as well have a cron or an SVN post-commit hook call the script directly.

Jenkins performing the SVN checkout itself is crucial. It allows the builds to be triggered only when there are changes (or on a timer, or manually, if you prefer). It keeps track of changes between builds. It shows those changes, so you can see which build was for which set of changes. It emails committers when their changes caused a successful or failed build (again, configured as you prefer). It will email committers when their fixes fixed the failing build. And more and more.

Jenkins archiving the artifacts also makes them available, per build, straight off Jenkins. While not as crucial as the SVN checkout, this is once again an integral part of what makes it Jenkins.

Same with deploying. Unless you have a single environment, deployment usually happens to multiple environments. Jenkins can keep track of which environment a specific build (with a specific set of SVN changes) is deployed to, through the use of Promotions.

You are foregoing all of this. It sounds like you were told "you have to use Jenkins" but you don't really want to, and you are doing it just to get your bosses off your back, just to put a checkmark "yes, I've used Jenkins".

The short answer is: the exit code of the last command of the Jenkins Execute Shell build step is what determines the success/failure of the build step. 0 - success, anything else - failure.

Note, this determines the success/failure of the build step, not the whole job run. The success/failure of the whole job run can further be affected by multiple build steps, and post-build actions and plugins.

You've mentioned Build step 'Execute shell' marked build as failure, so we will focus just on a single build step. If your Execute shell build step only has a single line that calls your shell script, then the exit code of your shell script will determine the success/failure of the build step. If you have more lines after your shell script execution, then carefully review them, as they are the ones that could be causing failure.

Finally, have a read here: Jenkins Build Script exits after Google Test execution. It is not directly related to your question, but note the part about Jenkins launching the Execute Shell build step as a shell script with /bin/sh -xe

The -e means that the shell script will exit with failure, even if just 1 command fails, even if you do error checking for that command (because the script exits before it gets to your error checking). This is contrary to normal execution of shell scripts, which usually print the error message for the failed command (or redirect it to null and handle it by other means), and continue.

To circumvent this, add set +e to the top of your shell script.

Since you say your script does all it is supposed to do, chances are the failing command is somewhere at the end of the script. Maybe a final echo? Or a copy of artifacts somewhere? Without seeing the full console output, we are just guessing.

Please post the job run's console output, and preferably the shell script itself too, and then we could tell you exactly which line is failing.
Jenkins
22,814,559
122
I have downloaded "jenkins-1.501.zip" from http://jenkins-ci.org/content/thank-you-downloading-windows-installer . I have extracted zip file and installed Jenkins on Windows 7 successfully. Jenkins runs at http://localhost:8080/ well. I want to stop Jenkins service from console. How can I do that? What's the way to start and restart through console/command line?
Open Console/Command line --> Go to your Jenkins installation directory (very likely cd "C:\Program Files\Jenkins\"). Execute the following commands respectively: to stop: .\jenkins.exe stop to start: .\jenkins.exe start to restart: .\jenkins.exe restart
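If Jenkins is installed as a Windows service, the standard service commands should also work from any directory, e.g. net stop jenkins and net start jenkins (the exact service name may differ depending on how Jenkins was installed).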
Jenkins
14,869,311
121
I'm trying to convert my old-style project-based workflow to a pipeline based on Jenkins. While going through the docs I found there are two different syntaxes, named scripted and declarative; the declarative syntax was released recently (end of 2016) according to the Jenkins web site. Although there is a new syntax release, Jenkins still supports the scripted syntax as well. Now, I'm not sure in which situation each of these two types would be the best match. So will declarative be the future of the Jenkins pipeline? Anyone who can share some thoughts about these two syntax types?
When Jenkins Pipeline was first created, Groovy was selected as the foundation. Jenkins has long shipped with an embedded Groovy engine to provide advanced scripting capabilities for admins and users alike. Additionally, the implementors of Jenkins Pipeline found Groovy to be a solid foundation upon which to build what is now referred to as the "Scripted Pipeline" DSL. As it is a fully featured programming environment, Scripted Pipeline offers a tremendous amount of flexibility and extensibility to Jenkins users. The Groovy learning-curve isn’t typically desirable for all members of a given team, so Declarative Pipeline was created to offer a simpler and more opinionated syntax for authoring Jenkins Pipeline. The two are both fundamentally the same Pipeline sub-system underneath. They are both durable implementations of "Pipeline as code." They are both able to use steps built into Pipeline or provided by plugins. Both are able to utilize Shared Libraries Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model. Copied from Syntax Comparison
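To make the difference concrete, here is the same trivial one-stage build (the job content is illustrative) in both syntaxes:

// Declarative
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}

// Scripted
node {
    stage('Build') {
        sh 'make'
    }
}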
Jenkins
43,484,979
119
Solved: Thanks to below answer from S.Richmond. I needed to unset all stored maps of the groovy.json.internal.LazyMap type which meant nullifying the variables envServers and object after use. Additional: People searching for this error might be interested to use the Jenkins pipeline step readJSON instead - find more info here. I am trying to use Jenkins Pipeline to take input from the user which is passed to the job as json string. Pipeline then parses this using the slurper and I pick out the important information. It will then use that information to run 1 job multiple times in parallel with differeing job parameters. Up until I add the code below "## Error when below here is added" the script will run fine. Even the code below that point will run on its own. But when combined I get the below error. I should note that the triggered job is called and does run succesfully but the below error occurs and fails the main job. Because of this the main job does not wait for the return of the triggered job. I could try/catch around the build job: however I want the main job to wait for the triggered job to finish. Can anyone assist here? If you need anymore information let me know. Cheers def slurpJSON() { return new groovy.json.JsonSlurper().parseText(BUILD_CHOICES); } node { stage 'Prepare'; echo 'Loading choices as build properties'; def object = slurpJSON(); def serverChoices = []; def serverChoicesStr = ''; for (env in object) { envName = env.name; envServers = env.servers; for (server in envServers) { if (server.Select) { serverChoicesStr += server.Server; serverChoicesStr += ','; } } } serverChoicesStr = serverChoicesStr[0..-2]; println("Server choices: " + serverChoicesStr); ## Error when below here is added stage 'Jobs' build job: 'Dummy Start App', parameters: [[$class: 'StringParameterValue', name: 'SERVER_NAME', value: 'TestServer'], [$class: 'StringParameterValue', name: 'SERVER_DOMAIN', value: 'domain.uk'], [$class: 'StringParameterValue', name: 'APP', value: 'application1']] } Error: java.io.NotSerializableException: groovy.json.internal.LazyMap at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:860) at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:569) at org.jboss.marshalling.river.BlockMarshaller.doWriteObject(BlockMarshaller.java:65) at org.jboss.marshalling.river.BlockMarshaller.writeObject(BlockMarshaller.java:56) at org.jboss.marshalling.MarshallerObjectOutputStream.writeObjectOverride(MarshallerObjectOutputStream.java:50) at org.jboss.marshalling.river.RiverObjectOutputStream.writeObjectOverride(RiverObjectOutputStream.java:179) at java.io.ObjectOutputStream.writeObject(Unknown Source) at java.util.LinkedHashMap.internalWriteEntries(Unknown Source) at java.util.HashMap.writeObject(Unknown Source) ... ... Caused by: an exception which occurred: in field delegate in field closures in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@5288c
Use JsonSlurperClassic instead.

Since Groovy 2.3 (note: Jenkins 2.7.1 uses Groovy 2.4.7) JsonSlurper returns LazyMap instead of HashMap. This makes the new implementation of JsonSlurper not thread safe and not serializable. This makes it unusable outside of @NonCPS functions in pipeline DSL scripts. However, you can fall back to groovy.json.JsonSlurperClassic, which supports the old behavior and can be safely used within pipeline scripts.

Example

import groovy.json.JsonSlurperClassic

@NonCPS
def jsonParse(def json) {
    new groovy.json.JsonSlurperClassic().parseText(json)
}

node('master') {
    def config = jsonParse(readFile("config.json"))
    def db = config["database"]["address"]
    ...
}

P.S. You will still need to approve JsonSlurperClassic before it can be called.
Jenkins
37,864,542
119
For example: var output=sh "echo foo"; echo "output=$output"; I will get: output=0 So, apparently I get the exit code rather than the stdout. Is it possible to capture the stdout into a pipeline variable, such that I could get: output=foo as my result?
Now, the sh step supports returning stdout by supplying the parameter returnStdout. // These should all be performed at the point where you've // checked out your sources on the slave. A 'git' executable // must be available. // Most typical, if you're not cloning into a sub directory gitCommit = sh(returnStdout: true, script: 'git rev-parse HEAD').trim() // short SHA, possibly better for chat notifications, etc. shortCommit = gitCommit.take(6) See this example.
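As a side note: if you instead want the exit code without failing the step on a non-zero result, sh also accepts returnStatus. A small sketch (the command is illustrative):

// returns the exit code instead of failing the step
def rc = sh(returnStatus: true, script: 'grep -q foo myfile.txt')
echo "exit code was ${rc}"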
Jenkins
36,507,410
119
Somehow in my app many of the Cordova plugins are installed, and because of that it requires access to almost everything - from my contacts to the current location (even though this app doesn't need them). This app is built via Jenkins, and as far as I understand, one solution is to remove every plugin with a single command, like this:

cordova plugin rm org.apache.cordova.battery-status
cordova plugin rm org.apache.cordova.camera
cordova plugin rm org.apache.cordova.contacts
cordova plugin rm org.apache.cordova.geolocation
cordova plugin rm org.apache.cordova.media
cordova plugin rm org.apache.cordova.media-capture
cordova plugin rm org.apache.cordova.splashscreen
cordova plugin rm org.apache.cordova.vibration

But sometimes it shows some errors, and with Jenkins any error ends up as a build failure. So is there any command which deletes all plugins? (During installation, the basic plugins required for any app to work are added automatically via Cordova, so I was looking for something like cordova plugin rm -all but couldn't find it.)
First, you should list your plugins: cordova plugin list With this result, you can simply do: cordova plugin remove <PLUGIN_NAME> For example: cordova plugin remove org.apache.cordova.media
Jenkins
21,932,758
119
Under certain conditions I want to fail the build. How do I do that? I tried: throw RuntimeException("Build failed for some specific reason!") This does in fact fail the build. However, the log shows the exception: org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use new java.lang.RuntimeException java.lang.String Which is a bit confusing to users. Is there a better way?
You can use the error step from the pipeline DSL to fail the current build. error("Build failed because of this and that..")
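For example, applied to the "under certain conditions" case from the question, inside a declarative script block (the condition and path are illustrative; in a scripted pipeline the script wrapper is not needed):

script {
    if (!fileExists('dist/app.jar')) {
        error("Build failed because the expected artifact is missing")
    }
}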
Jenkins
37,685,958
118
I'm running Jenkins 2 with the Pipeline plugin. I have setup a Multi-branch Pipeline project where each branch (master, develop, etc.) has a Jenkinsfile in the root. Setting this up was simple. However, I'm at a loss for how to have each branch run periodically (not the branch indexing), even when the code does not change. What do I need to put in my Jenkinsfile to enable periodic builds?
If you use a declarative style Pipeline and only want to trigger the build on a specific branch you can do something like this: String cron_string = BRANCH_NAME == "master" ? "@hourly" : "" pipeline { agent none triggers { cron(cron_string) } stages { // do something } } Found on Jenkins Jira
Jenkins
39,168,861
117
The groovy syntax generator is NOT working for sample step properties: Set Job Properties. I've selected Discard old builds and then entered 10 in the Max # of builds to keep field and then Generate Groovy and nothing shows up. Jenkins version: 2.7
As for declarative syntax, you can use the options block:

pipeline {
  options {
      buildDiscarder(logRotator(numToKeepStr: '30', artifactNumToKeepStr: '30'))
  }
  ...
}

Parameters for logRotator (from the source code):

daysToKeepStr: builds are only kept up to this number of days.
numToKeepStr: only this number of build logs are kept.
artifactDaysToKeepStr: artifacts are only kept up to this number of days.
artifactNumToKeepStr: only this number of builds have their artifacts kept.

More information can be found in the Cloudbees knowledge base and in the docs for the options block.
Jenkins
39,542,485
116
When setting up how Jenkins should pull changes from Subversion, I checked Poll SCM and set the schedule to 5 * * * *, and I get the following warning:

Spread load evenly by using ‘H * * * *’ rather than ‘5 * * * *’

I'm not sure what H means in this context or why I should use it.
H stands for Hash To allow periodically scheduled tasks to produce even load on the system, the symbol H (for “hash”) should be used wherever possible. For example, using 0 0 * * * for a dozen daily jobs will cause a large spike at midnight. In contrast, using H H * * * would still execute each job once a day, but not all at the same time, better using limited resources.
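The H token can also take a range to constrain when the hashed value may fall; for example, H H(0-7) * * * runs once a day at some job-specific but stable time between 00:00 and 07:59.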
Jenkins
26,383,778
115
I'm looking to run automated NUnit tests for a C# application, nightly and on each commit to svn. Is this something that Jenkins-CI can do? Is there an online tutorial or how-to document which documents a similar setup that I can look at?
I needed to do exactly what you do; here's how I set up Jenkins to do this:

Add the NUnit Plugin to Jenkins
In your project go to Configure -> Build -> Add a build step
In the dropdown scroll down to -> Execute Windows Batch Command
Ensure this step is placed after your MSBuild step
Add the following, replacing the variables:

Single dll test:

[PathToNUnit]\bin\nunit-console.exe [PathToTestDll]\Selenium.Tests.dll /xml=nunit-result.xml

Multiple dll test using NUnit test projects:

[PathToNUnit]\bin\nunit-console.exe [PathToTests]\Selenium.Tests.nunit /xml=nunit-result.xml

Under Post-build Actions, tick Publish NUnit test result report
For the textbox Test report XMLs, enter nunit-result.xml

Once your project has been built, NUnit will now run and the results will be viewable either on the Dashboard (if you hover over the Weather report icon) or on the project page under Last Test Result.

You could also run the command from within Visual Studio or as part of your local build process.

Here are two blog posts I used for reference. I didn't find any that fitted my requirements exactly:
1-Hour Guide to Continuous Integration Setup: Jenkins meets .Net (2011)
Guide to building .NET projects using Hudson (2008)
Jenkins
9,121,312
114
I am trying to use the Jenkins REST API. In the instructions it says I need to have the API key. I have looked all over the configuration pages to find it. How do I get the API key for Jenkins?
Since Jenkins 2.129 the API token configuration has changed: You can now have multiple tokens and name them. They can be revoked individually. Log in to Jenkins. Click your name (upper-right corner). Click Configure (left-side menu). Use "Add new Token" button to generate a new one then name it. You must copy the token when you generate it as you cannot view the token afterwards. Revoke old tokens when no longer needed. Before Jenkins 2.129: Show the API token as follows: Log in to Jenkins. Click your name (upper-right corner). Click Configure (left-side menu). Click Show API Token. The API token is revealed. You can change the token by clicking the Change API Token button.
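The token is then used in place of a password for HTTP basic authentication against the REST API, e.g. curl -u USERNAME:APITOKEN http://yourserver/jenkins/api/json.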
Jenkins
45,466,090
113