Dataset columns: question (string, length 11–28.2k) · answer (string, length 26–27.7k) · tag (string, 130 classes) · question_id (int64, 935–78.4M) · score (int64, 10–5.49k)
My project size is 1.63 GB (a Magento project) and I followed this tutorial. When I run the command git push -u origin master, it starts writing objects and after that I get this error in the git console: error: RPC failed, result=22, HTTP code = 502 fatal: The remote end hung up unexpectedly fatal: The remote end hung up unexpectedly What should I do to make this work? The result of git remote -v is:
The remote end hangs up because the pack size you are trying to transmit exceeds the maximum HTTP POST size. Try raising the post buffer to 150 MB with git config --local http.postBuffer 157286400.
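A minimal sketch of that workaround, assuming the remote is named origin and the branch is master as in the question:

    # raise the HTTP post buffer for this repository only (157286400 bytes = 150 MB)
    git config --local http.postBuffer 157286400
    # verify the value that was written to .git/config
    git config --local --get http.postBuffer
    # retry the push
    git push -u origin master

If the repository is much larger than the buffer, pushing over SSH instead of HTTPS avoids this limit entirely.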
GitLab
24,322,896
33
tl;dr How do I pass data, e.g. the $BUILD_VERSION variable, between jobs in different pipelines in Gitlab CI? So (in my case) this: Pipeline 1 on push ect. Pipeline 2 after merge `building` job ... `deploying` job β”‚ β–² └─────── $BUILD_VERSION β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Background Consider the following example (full yml below): building: stage: staging # only on merge requests rules: # execute when a merge request is open - if: $CI_PIPELINE_SOURCE == "merge_request_event" when: always - when: never script: - echo "BUILD_VERSION=1.2.3" > build.env artifacts: reports: dotenv: build.env deploying: stage: deploy # after merge request is merged rules: # execute when a branch was merged to staging - if: $CI_COMMIT_BRANCH == $STAGING_BRANCH when: always - when: never dependencies: - building script: - echo $BUILD_VERSION I have two stages, staging and deploy. The building job in staging builds the app and creates a "Review App" (no separate build stage for simplicity). The deploying job in deploy then uploads the new app. The pipeline containing the building job runs whenever a merge request is opened. This way the app is built and the developer can click on the "Review App" icon in the merge request. The deploying job is run right after the merge request is merged. The idea is the following: *staging* stage (pipeline 1) *deploy* stage (pipeline 2) <open merge request> -> `building` job (and show) ... <merge> -> `deploying` job β”‚ β–² └───────────── $BUILD_VERSION β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The problem for me is, that the staging/building creates some data, e.g. a $BUILD_VERSION. I want to have this $BUILD_VERSION in the deploy/deploying, e.g. for creating a new release via the Gitlab API. So my question is: How do I pass the $BUILD_VERSION (and other data) from staging/building to deploy/deploying? What I've tried so far artifacts.reports.dotenv The described case is more less handled in the gitlab docs in Pass an environment variable to another job. Also the yml file shown below is heavily inspired by this example. Still, it does not work. The build.env artifact is created in building, but whenever the deploying job is executed, the build.env file gets removed as shown below in line 15: "Removing build.env". I tried to add build.env to the .gitignore but it still gets removed. After hours of searching I found in this gitlab issue comment and this stackoverflow post that the artifacts.reports.dotenv doesn't work with the dependencies or the needs keywords. Removing dependencies doesn't work. Using needs only doesn't work either. Using both is not allowed. Does anyone know a way how to get this to work? I feel like this is the way it should work. Getting the artifacts as a file This answer of the stackoverflow post Gitlab ci cd removes artifact for merge requests suggests to use the build.env as a normal file. I also tried this. The (relevant) yml is the following: building: # ... artifacts: paths: - build.env deploying: # ... before_script: - source build.env The result is the same as above. The build.env gets removed. Then the source build.env command fails because build.env does not exist. (Doesn't matter if build.env is in the .gitignore or not, tested both) Getting the artifacts from the API I also found the answer of the stackoverflow post Use artifacts from merge request job in GitLab CI which suggests to use the API together with $CI_JOB_TOKEN. 
But since I need the artifacts in a non-merge-request pipeline, I cannot use the suggested CI_MERGE_REQUEST_REF_PATH. I tried to use $CI_COMMIT_REF_NAME. The (important section of the) yml is then: deploying: # ... script: - url=$CI_API_V4_URL/projects/jobs/artifacts/$CI_COMMIT_REF_NAME/download?job=building - echo "Downloading $url" - 'curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --output $url' # ... But this the API request gets rejected with "404 Not Found". Since commit SHAs are not supported, $CI_COMMIT_BEFORE_SHA or $CI_COMMIT_SHA do not work either. Using needs Update: I found the section Artifact downloads between pipelines in the same project in the gitlab docs which is exactly what I want. But: I can't get it to work. The yml looks like the following after more less copying from the docs: building: # ... artifacts: paths: - version expire_in: never deploying: # ... needs: - project: $CI_PROJECT_PATH job: building ref: staging # building runs on staging branch, main doesn't work either artifacts: true Now the deploying job instantly fails and I get the following error banner: I tried to set artifacts.expire_in = never (as shown) but I still get the same error. Also in Settings > CI/CD > Artifacts "Keep artifacts from most recent successful jobs" is selected. So the artifact should be present. What did I miss here? This should work according to the docs! I hope somebody can help me on getting the $BUILD_VERSION to the deploying job. If there are other ways than the ones I've tried, I'm very happy to hear them. Thanks in advance. The example .gitlab-ci.yml: stages: - staging - deploy building: tags: - docker image: bash stage: staging rules: - if: ($CI_PIPELINE_SOURCE == "merge_request_event") && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "staging" when: always - when: never script: - echo "BUILD_VERSION=1.2.3" > build.env artifacts: reports: dotenv: build.env environment: name: Example url: https://example.com deploying: tags: - docker image: bash stage: deploy rules: - if: $CI_COMMIT_BRANCH == "staging" when: always - when: never dependencies: - building script: echo $BUILD_VERSION
Going by the Gitlab docs, it should be possible to download any job's artifact by URL, if it hasn't expired yet. In addition, you can use the Gitlab API to download (unexpired) artifacts from other projects, too; and you can use the Gitlab API to mark a job's artifacts for keeping-regardless-of-expiry-policy, or to delete an artifact. But I have not tried this myself. For your case, assuming the 'building' and 'deploying' jobs both run on the main branch, you can hopefully pass the artifact like so. If you have some other way of finding out in the deploying job what branch name X the building job ran on, then you can download the artefact from branch X instead of always from main like I do below. # Assuming # domain: example.com # namespace: mygroup # project: myproject building: # on latest commit on `main`, because we need a predictable # branch name to save/retrieve the artifact. stage: staging script: - echo "BUILD_VERSION=1.2.3" > build.env artifacts: # 'paths', not 'reports' paths: - build.env deploying: # after merge request is merged stage: deploy dependencies: - building script: # could use ${CI_PROJECT_URL} to get https://example.com/mygroup/myproj - curl https://${CI_SERVER_HOST}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/-/jobs/artifacts/main/raw/build.env?job=building > build.env - source build.env - echo $BUILD_VERSION # should print '1.2.3'. The relevant parts of the docs, with links and excerpts: Access a branch or tag's latest job artifacts by URL To browse or download the latest artifacts of a branch, use one of these two urls. [I think the /file/ variant is used for Gitlab Pages artifacts, but I'm not sure. --Esteis] Browse artifacts: https://example.com/<namespace>/<project>/-/jobs/artifacts/<ref>/browse?job=<job_name> Download zip of all artifacts: https://example.com/<namespace>/<project>/-/jobs/artifacts/<ref>/download?job=<job_name> Download one artifact file: https://example.com/<namespace>/<project>/-/jobs/artifacts/<ref>/raw/<path/to/file>?job=<job_name> Download one artifact file (Gitlab Pages-related?): https://example.com/<namespace>/<project>/-/jobs/artifacts/<ref>/file/<path>?job=<job_name> For example, to download an artifact with domain gitlab.com, namespace gitlab-org, project gitlab, latest commit on main branch, job coverage, file path review/index.html: https://gitlab.com/gitlab-org/gitlab/-/jobs/artifacts/main/raw/review/index.html?job=coverage Config setting: Keep artifacts from each branch's most recent succesful jobs This option is on by default AFAICT it keeps the most recent artifact for every active branch or tag (a.k.a. a 'ref'); if multiple pipelines are run on that ref, last pipeline's artifacts overwrite those produced by earlier pipelines. All other artifacts are still governed by the expire_in setting in the .gitlab-ci.yml that produced them. Gitlab API for job artifacts Advantage of using the Gitlab API is that if you can get the right tokens, you can also download artifacts from other projects. You'll need the numeric project ID -- that's $CI_PROJECT_ID, if your script is running in Gitlab CI. To download an artifact archive: curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/${CI_PROJECT_ID}/jobs/artifacts/main/download?job=test" To download a single artifact file: curl --location --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/${CI_PROJECT_ID}/jobs/artifacts/main/raw/some/release/file.pdf?job=pdf"
GitLab
68,179,565
32
GitLab's running in kubernetes cluster. Runner can't build docker image with build artifacts. I've already tried several approaches to fix this, but no luck. Here are some configs snippets: .gitlab-ci.yml image: docker:latest services: - docker:dind variables: DOCKER_DRIVER: overlay stages: - build - package - deploy maven-build: image: maven:3-jdk-8 stage: build script: "mvn package -B --settings settings.xml" artifacts: paths: - target/*.jar docker-build: stage: package script: - docker build -t gitlab.my.com/group/app . - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app - docker push gitlab.my.com/group/app config.toml concurrent = 1 check_interval = 0 [[runners]] name = "app" url = "https://gitlab.my.com/ci" token = "xxxxxxxx" executor = "kubernetes" [runners.kubernetes] privileged = true disable_cache = true Package stage log: running with gitlab-ci-multi-runner 1.11.1 (a67a225) on app runner (6265c5) Using Kubernetes namespace: default Using Kubernetes executor with image docker:latest ... Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending Running on runner-6265c5-project-4-concurrent-0h9lg9 via gitlab-runner-3748496643-k31tf... Cloning repository... Cloning into '/group/app'... Checking out 10d5a680 as master... Skipping Git submodules setup Downloading artifacts for maven-build (61)... Downloading artifacts from coordinator... ok id=61 responseStatus=200 OK token=ciihgfd3W $ docker build -t gitlab.my.com/group/app . Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1 What am I doing wrong?
Don't need to use this: DOCKER_DRIVER: overlay cause it seems like OVERLAY isn't supported, so svc-0 container is unable to start with it: $ kubectl logs -f `kubectl get pod |awk '/^runner/{print $1}'` -c svc-0 time="2017-03-20T11:19:01.954769661Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]" time="2017-03-20T11:19:01.955720778Z" level=info msg="libcontainerd: new containerd process, pid: 20" time="2017-03-20T11:19:02.958659668Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded." Also, add export DOCKER_HOST="tcp://localhost:2375" to the docker-build: docker-build: stage: package script: - export DOCKER_HOST="tcp://localhost:2375" - docker build -t gitlab.my.com/group/app . - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app - docker push gitlab.my.com/group/app
GitLab
42,867,039
32
I'm currently working on a deployment script to run as part of my GitLab CI setup. What I want is to copy a file from one location to another and rename it. I also want to be able to find which commit the file was generated with, so I'd like to add the commit hash to its name. For that to work I'd like to use something like this: cp myLogFile.log /var/log/gitlab-runs/$COMMITHASH.log The output should be a file named e.g. /var/log/gitlab-runs/9b43adf.log How can I achieve this using GitLab CI?
In your example you used the short git hash. You can get it from the predefined variable CI_COMMIT_SHA by taking a substring, like this: ${CI_COMMIT_SHA:0:8}, or by using the short SHA directly: $CI_COMMIT_SHORT_SHA
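A hedged sketch of how that could look in a deploy job, reusing the log path from the question (the job name and stage are placeholders):

    deploy_log:
      stage: deploy
      script:
        # short SHA directly (available on reasonably recent GitLab versions)
        - cp myLogFile.log /var/log/gitlab-runs/${CI_COMMIT_SHORT_SHA}.log
        # or build the substring yourself from the full SHA (bash-based runners)
        - cp myLogFile.log /var/log/gitlab-runs/${CI_COMMIT_SHA:0:8}.log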
GitLab
35,064,320
32
How does one retrieve an arbitrary user's SSH public keys from GitLab? GitHub provides this feature, for example: https://github.com/winny-.keys The GitLab API exposes public keys, however it looks like it requires: separate authentication, querying a given user name for its UID, and finally getting the public keys.
GitHub-style SSH public key access was added in GitLab 6.6.0 using the following scheme: http://__HOST__/__USERNAME__.keys (thanks @bastelflp). Currently we are running 6.2.3, and we will upgrade.
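For a quick check from the command line (the hostname and username are placeholders for your own installation):

    curl https://gitlab.example.com/some_username.keys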
GitLab
24,839,301
32
I have a gitlab pipeline where there are two stages, one is build and the other one is deploy. The build stage is run when a commit is made. I want a way to run the deploy job when the merge request is merged to master. I tried several things but no luck. Can anyone help? stages: - build - deploy dotnet: script: "echo This builds!" stage: build production: script: "echo This deploys!" stage: deploy only: refs: - master
Try using the gitlab-ci.yml "rules" feature to check for the merge request event. Your current gitlab-ci.yml will run your "dotnet" job every commit, merge request, schedule, and manually triggered pipeline. https://docs.gitlab.com/ee/ci/yaml/#workflowrules dotnet: script: "echo This builds!" stage: build rules: - if: '$CI_COMMIT_REF_NAME != "master" && $CI_PIPELINE_SOURCE == "push" || $CI_PIPELINE_SOURCE == "merge_request_event"' production: script: "echo This deploys!" stage: deploy rules: - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME == "master"'
GitLab
63,893,431
31
So I have 2 similar deployments on k8s that pulls the same image from GitLab. Apparently this resulted in my second deployment to go on a CrashLoopBackOff error and I can't seem to connect to the port to check on the /healthz of my pod. Logging the pod shows that the pod received an interrupt signal while describing the pod shows the following message. FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 29m 29m 1 default-scheduler Normal Scheduled Successfully assigned java-kafka-rest-kafka-data-2-development-5c6f7f597-5t2mr to 172.18.14.110 29m 29m 1 kubelet, 172.18.14.110 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-m4m55" 29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Pulled Container image "..../consul-image:0.0.10" already present on machine 29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Created Created container 29m 29m 1 kubelet, 172.18.14.110 spec.containers{consul} Normal Started Started container 28m 28m 1 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Killing Killing container with id docker://java-kafka-rest-development:Container failed liveness probe.. Container will be killed and recreated. 29m 28m 2 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Created Created container 29m 28m 2 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Started Started container 29m 27m 10 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning Unhealthy Readiness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused 28m 24m 13 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning Unhealthy Liveness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused 29m 19m 8 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Normal Pulled Container image "r..../java-kafka-rest:0.3.2-dev" already present on machine 24m 4m 73 kubelet, 172.18.14.110 spec.containers{java-kafka-rest-development} Warning BackOff Back-off restarting failed container I have tried to redeploy the deployments under different images and it seems to work just fine. However I don't think this will be efficient as the images are the same throughout. How do I go on about this? Here's what my deployment file looks like: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: "java-kafka-rest-kafka-data-2-development" labels: repository: "java-kafka-rest" project: "java-kafka-rest" service: "java-kafka-rest-kafka-data-2" env: "development" spec: replicas: 1 selector: matchLabels: repository: "java-kafka-rest" project: "java-kafka-rest" service: "java-kafka-rest-kafka-data-2" env: "development" template: metadata: labels: repository: "java-kafka-rest" project: "java-kafka-rest" service: "java-kafka-rest-kafka-data-2" env: "development" release: "0.3.2-dev" spec: imagePullSecrets: - name: ... 
containers: - name: java-kafka-rest-development image: registry...../java-kafka-rest:0.3.2-dev env: - name: DEPLOYMENT_COMMIT_HASH value: "0.3.2-dev" - name: DEPLOYMENT_PORT value: "7533" livenessProbe: httpGet: path: /healthz port: 7533 initialDelaySeconds: 30 timeoutSeconds: 1 readinessProbe: httpGet: path: /healthz port: 7533 timeoutSeconds: 1 ports: - containerPort: 7533 resources: requests: cpu: 0.5 memory: 6Gi limits: cpu: 3 memory: 10Gi command: - /envconsul - -consul=127.0.0.1:8500 - -sanitize - -upcase - -prefix=java-kafka-rest/ - -prefix=java-kafka-rest/kafka-data-2 - java - -jar - /build/libs/java-kafka-rest-0.3.2-dev.jar securityContext: readOnlyRootFilesystem: true - name: consul image: registry.../consul-image:0.0.10 env: - name: SERVICE_NAME value: java-kafka-rest-kafka-data-2 - name: SERVICE_ENVIRONMENT value: development - name: SERVICE_PORT value: "7533" - name: CONSUL1 valueFrom: configMapKeyRef: name: consul-config-... key: node1 - name: CONSUL2 valueFrom: configMapKeyRef: name: consul-config-... key: node2 - name: CONSUL3 valueFrom: configMapKeyRef: name: consul-config-... key: node3 - name: CONSUL_ENCRYPT valueFrom: configMapKeyRef: name: consul-config-... key: encrypt ports: - containerPort: 8300 - containerPort: 8301 - containerPort: 8302 - containerPort: 8400 - containerPort: 8500 - containerPort: 8600 command: [ entrypoint, agent, -config-dir=/config, -join=$(CONSUL1), -join=$(CONSUL2), -join=$(CONSUL3), -encrypt=$(CONSUL_ENCRYPT) ] terminationGracePeriodSeconds: 30 nodeSelector: env: ...
To those having this problem: I've discovered the cause and the solution. The problem was in my service.yml, where targetPort pointed to a port different from the one opened in my Docker image. Make sure the Service's targetPort matches the port that the container actually listens on. Hope this helps.
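A hedged sketch of a matching Service, assuming the labels from the deployment above and that the container listens on 7533 (the Service name and selector here are guesses based on the question):

    apiVersion: v1
    kind: Service
    metadata:
      name: java-kafka-rest-kafka-data-2-development
    spec:
      selector:
        service: "java-kafka-rest-kafka-data-2"
        env: "development"
      ports:
        - port: 80          # port other pods/clients use to reach the service
          targetPort: 7533  # must match the containerPort the image actually listens on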
GitLab
53,535,540
31
Using Team City to check out from a Git Repo. (Gitlabs if it matters) Start with Empty build directory. Get this error: fatal: could not set 'core.filemode' to 'false' (Running on a Windows machine, if that matters) The user that Team City is running on was changed to an Admin just in case. The .Git directory is not a valid Repo when this command exits. Wiping the entire 'work' directory doesn't help. It randomly comes and goes... AND this: git config --global --replace-all core.fileMode false Does nothing useful - with or without the --replace-all, and run as admin, or another user (if you change 'false' to 'true' you get the same error, if you change it to 'falseCD' it changes the error to that being an invalid value - so clearly, it is changing it. Anyone got any ideas?
In my case using "sudo" worked for me. For example: asif@asif-vm:/mnt/prog/protobuf_tut$ git clone https://github.com/protocolbuffers/protobuf.git Cloning into 'protobuf'... error: chmod on /mnt/prog/protobuf_tut/protobuf/.git/config.lock failed: Operation not permitted fatal: could not set 'core.filemode' to 'false' After doing a "sudo" I could get it working: asif@asif-vm:/mnt/prog/protobuf_tut$ sudo git clone https://github.com/protocolbuffers/protobuf.git Cloning into 'protobuf'... remote: Enumerating objects: 5, done. remote: Counting objects: 100% (5/5), done. remote: Compressing objects: 100% (5/5), done. remote: Total 66782 (delta 0), reused 0 (delta 0), pack-reused 66777 Receiving objects: 100% (66782/66782), 55.83 MiB | 2.04 MiB/s, done. Resolving deltas: 100% (45472/45472), done. Checking out files: 100% (2221/2221), done.
GitLab
50,108,363
31
Most of our work occurs on GitLab.com (i.e. not a local GitLab installation). If the upstream repo resides on GitHub, is there a way to submit a pull request to upstream? (If forking the upstream repo in a particular way is part of solution, that's ok.)
No. The correct workflow would be to fork the upstream project on GitHub into your own namespace. Then add your fork as a remote (e.g. "upstream") in your GitLab repository, in addition to the origin. From your GitLab repository you then push changes to your fork, and on GitHub you can submit a pull request from your fork to the original project.
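A rough sketch of that remote setup, with placeholder names for the GitHub user, project, and branch:

    # inside your local clone of the GitLab repository
    git remote add upstream git@github.com:<your-github-user>/<forked-project>.git

    # push your branch to the GitHub fork, then open the pull request on GitHub
    git push upstream my-feature-branch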
GitLab
37,672,694
31
I am interested in building a wiki for my scientific computing code on GitLab, which requires me to write equations and render them in the GitLab wiki. How do I do this? I tried to paste the MathJax rendering script but it doesn't work. Can KaTeX be used somehow? $$ \partial_t \int_{\Omega} \mathbf{q} \, d\Omega = \int_{\partial \Omega} \mathbf{f}(\mathbf{q}) \cdot \mathbf{n} \, d\partial\Omega - \int_{\Omega} h g \nabla z_b $$
GitLab supports KaTeX from GitLab CE 8.15 onward, using math code blocks and backticks. See the GitLab Markdown documentation on math and the relevant discussion on merge request 8003. The current way to write equations in GitLab is shown below.
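A sketch of the syntax in current GitLab versions (check the docs for the version you run): inline math goes between $` and `$, and display math goes in a fenced block whose language is math, reusing the question's equation as an example:

    Inline: the flux term is $`\mathbf{f}(\mathbf{q}) \cdot \mathbf{n}`$.

    ```math
    \partial_t \int_{\Omega} \mathbf{q}\, d\Omega = \int_{\partial \Omega} \mathbf{f}(\mathbf{q}) \cdot \mathbf{n}\, d\partial\Omega - \int_{\Omega} h g \nabla z_b
    ```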
GitLab
35,259,660
31
I'm trying to install GitLab on Debian with this tutorial: https://github.com/gitlabhq/gitlabhq/blob/master/doc/install/installation.md I'm at the "Install Gems" step and try to run: sudo -u git -H bundle install --deployment --without development test postgres aws I get this output: Fetching source index from https://rubygems.org/ Could not find modernizr-2.6.2 in any of the sources I can't find a solution for this error; I ran it as root as well. Thanks for the help.
I ran into this same problem a few minutes ago. Looks like the classy folks behind Modernizr's Rubygem yanked the most recent versions. You can download the latest gem (modernizr-2.6.2, as required by the Gemfile) by running the following command inside your /home/git/gitlab directory: wget http://rubygems.org/downloads/modernizr-2.6.2.gem Then, go ahead and run gem install modernizr (without changing directories) and the utility will search the local directory for the gem file before trying to fetch it remotely. This is the gem we're looking for. NOTE: It looks like some people are still having problems with this solution, so something else we can do is replace a few lines in Gemfile and Gemfile.lock (both in /home/git/gitlab), switching modernizr for modernizr-rails: in Gemfile, line 164, change "modernizr", "2.6.2" to "modernizr-rails", "2.7.1"; in Gemfile.lock, line 292, change modernizr (2.6.2) to modernizr-rails (2.7.1); in Gemfile.lock, line 626, change modernizr (= 2.6.2) to modernizr-rails (= 2.7.1). This second solution is thanks to csj4032 on Github.
GitLab
22,825,497
31
By default GitLab has the following configuration in gitlab.yml: email: from: [email protected] host: gitlabhq.com but I need to specify other variables (host, port, user, password, etc.) to use another mail server. How do I do that?
Now it is totally different in GitLab 5.2+. The SMTP settings live in "/home/git/gitlab/config/initializers/smtp_settings.rb.sample"; you just need to follow the instructions in that file.
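Very roughly, the file looks like the sketch below once copied to smtp_settings.rb; every value here is a placeholder, and the .sample file shipped with your GitLab version is the authoritative template:

    # /home/git/gitlab/config/initializers/smtp_settings.rb
    if Rails.env.production?
      Gitlab::Application.config.action_mailer.delivery_method = :smtp

      ActionMailer::Base.smtp_settings = {
        address: "smtp.example.com",       # your mail server
        port: 587,
        user_name: "gitlab@example.com",
        password: "secret",
        domain: "example.com",
        authentication: :login,
        enable_starttls_auto: true
      }
    end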
GitLab
10,690,255
31
I'm trying to log in from Docker to GitLab using the command: sudo docker login registry.gitlab.com?private_token=XXX But I keep getting the following error message: Error response from daemon: Get https://registry.gitlab.com/v2/: unauthorized: HTTP Basic: Access denied\nYou must use a personal access token with 'api' scope for Git over HTTP.\nYou can generate one at https://gitlab.com/-/profile/personal_access_tokens The token has the right access, I double-checked... I am rather new to Docker, any hint/help? Thanks!
The correct command line (that works in my case at least) was: docker login registry.example.com -u <your_username> -p <your_personal_access_token>
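If you prefer not to put the token on the command line (where it ends up in shell history), Docker also accepts it on stdin; a sketch, assuming the token is stored in an environment variable:

    echo "$PERSONAL_ACCESS_TOKEN" | docker login registry.gitlab.com -u <your_username> --password-stdin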
GitLab
65,072,379
30
The title says it all. I took a look at the GitLab docs but couldn't find a clear-cut solution to this. How do I add an image to the README on GitLab? An image that's within the repo.
You can use ![image info](images/image.png) where images could be a directory in your project structure.
GitLab
59,738,918
30
How do I change the GitLab multi-runner build path? On my server it is /home/gitlab-runner/builds. I want to change this path to my secondary HDD that is mounted on the same server.
You can change your runner's build path by adjusting the config.toml. In the [[runners]] section, add or change the builds_dir setting. For further reference on runner configuration, check out the runner documentation.
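A hedged config.toml sketch; the mount point is an assumption, so use whatever path your secondary HDD is mounted on, and restart the runner (gitlab-runner restart) after editing:

    concurrent = 1

    [[runners]]
      name = "my-runner"
      url = "https://gitlab.example.com/"
      token = "xxxxxxxx"
      executor = "shell"
      builds_dir = "/mnt/hdd2/gitlab-runner/builds"   # assumed mount point of the second disk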
GitLab
41,853,100
30
I cloned a project and ran git checkout -b develop. When I run git flow feature start feature_name it gives me the following error: Fatal: Not a gitflow-enabled repo yet. Please run 'git flow init' first. Can anyone help me?
I got it working by doing the steps mentioned by jpfl @ answers.atlassian.com: Although this is an old post, just wanted to add to this since I've gotten stuck on this same error. Was able to resolve by doing the following: Open the .git\config file OR Repository -> Repository Settings -> Remotes -> Edit Config File (Sourcetree 2.7.6) Remove all the [gitflow * entries and save the file Close and re-open SourceTree In the main menu, go to Repository > Git Flow > Initialise Repository (should be enabled now)
GitLab
36,843,062
30
There is an on-premise instance of gitlab installed. There are Visual Studio projects in this instance. What is the easiest way of connecting Visual Studio 2015 to one of the projects? With GitHub, you can do it by selecting "Connect to GitHub" as on the following picture: and then pasting the repository url. There is no GitLab option in the drop down. What is the easiest way of configuring Visual Studio 2015 to work with a solution from gitlab repository? By work I mean to have your usual source control bindings to the repository. Note, that this question is probably useful in more general context of connecting to any git repository that is not GitHub, and does not have direct support with built-in Visual Studio menu, not just to GitLab repository.
First, get the clone using command line: git clone <repository url> Then in Visual Studio, in the Team Explorer pane, select the connect button and look for Local Git Repositories "tab": Press Add, as indicated on the picture, and select the folder you cloned your repository too. When the process finishes, you can double-click the added repo to "connect" to it, and then select and open a solution it contains. After that follow your usual Visual Studio git workflow.
GitLab
35,167,121
30
I'm trying to use 'cache' in .gitlab-ci.yml (http://doc.gitlab.com/ce/ci/yaml/README.html#cache). My gitlab version is 8.2.1 and my Runner is: $ docker exec -it gitlab-runner gitlab-runner -v gitlab-runner version 0.7.2 (998cf5d) So according to the doc, everything is up to date, but I'm unable to use the cache ;-(. All my files are always deleted. Am I doing something wrong? A cache archive is created, but not passed to the next jobs. Here is my .gitlab-ci.yml $ cat .gitlab-ci.yml stages: - createcache - testcache createcache: type: createcache cache: untracked: true paths: - doc/ script: - touch doc/cache.txt testcache: type: testcache cache: untracked: true paths: - doc/ script: - find . - ls doc/cache.txt Output of the job 'createcache' Running on runner-141d90d4-project-2-concurrent-0 via 849d416b5994... Fetching changes... HEAD is now at 2ffbadb MUST BE REVERTED [...] $ touch doc/cache.txt [...] Archiving cache... INFO[0000] Creating archive cache.tgz ... INFO[0000] Done! Build succeeded. Output of the job 'testcache' Running on runner-141d90d4-project-2-concurrent-0 via 849d416b5994... Fetching changes... Removing doc/cache.txt [...] $ ls doc/cache.txt ls: cannot access doc/cache.txt: No such file or directory ERROR: Build failed with: exit code 1 My workaround My workaround is to manually untar what's in the /cache directory ... I'm pretty sure that's not the correct way to use cache ... $ cat .gitlab-ci.yml stages: - build - test - deploy image: ubuntu:latest before_script: - export CACHE_FILE=`echo ${CI_PROJECT_DIR}/createcache/${CI_BUILD_REF_NAME}/cache.tgz | sed -e "s|/builds|/cache|"` createcache: type: build cache: untracked: true paths: - doc/ script: - find . | grep -v ".git" - mkdir -p doc - touch doc/cache.txt testcache: type: test script: - env - find . | grep -v ".git" - tar xvzf ${CACHE_FILE} - ls doc/cache.txt
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/327 image: java:openjdk-8-jdk before_script: - export GRADLE_USER_HOME=`pwd`/.gradle cache: paths: - .gradle/wrapper - .gradle/caches build: stage: build script: - ./gradlew assemble test: stage: test script: - ./gradlew check
GitLab
33,940,384
30
I am trying to make use of the variables: keyword documented in the Gitlab CI Documentation here: FROM: https://docs.gitlab.com/ce/ci/yaml/README.html variables This feature requires gitlab-runner with version equal or greater than 0.5.0. GitLab CI allows you to add to .gitlab-ci.yml variables that are set in build environment. The variables are stored in repository and are meant to store non-sensitive project configuration, ie. RAILS_ENV or DATABASE_URL. variables: DATABASE_URL: "postgres://postgres@postgres/my_database" These variables can be later used in all executed commands and scripts. The YAML-defined variables are also set to all created service containers, thus allowing to fine tune them. When I attempt to use it, my builds do not run any stages and are marked successful anyway, a good sign of bad YAML. I pasted my gitlab-ci.yml contents into the LINT tool in the settings area and the output error is: Status: syntax is incorrect Error: variables job: unknown parameter PACKAGE_NAME I'm using my YAML syntax the same as the docs, however it will not work. I'm unable to find any open bugs related to this. Below are my current versions and a sanitized version of my gitlab-ci.yml. Gitlab Version: 7.13.2 Omnibus Gitlab Runner Version: 0.5.2 gitlab-ci.yml (Sanitized) types: - test - build variables: PACKAGE_NAME: "awesome-django-app" PACKAGE_SUMMARY: "Awesome webapp backend." MAJOR_RELEASE: "1" MINOR_RELEASE: "0" PATCH_LEVEL: "0dev" DEV_DB_URL: "db" DEV_SERVER: "pydev.example.com" PROD_SERVER: "pyprod.example.com" TEST_SERVER: "pytest.example.com" envtest: type: test script: - ". ./testbuild.sh" tags: - python2.7 - postgres - linux except: - tags buildrpm: type: build script: - mkdir -p ~/rpmbuild/SOURCES - mkdir -p ~/rpmbuild/SPECS - mkdir -p ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL - cp $PACKAGE_NAME.spec ~/rpmbuild/SPECS/. - cp -r * ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL/. - cd ~/tarbuild - tar -zcf ~/rpmbuild/SOURCES/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL.tar.gz * - cd ~ - rm -Rf ~/tarbuild - rpmlint -i ~/rpmbuild/SPECS/$PACKAGE_NAME.spec - echo $CI_BUILD_ID - 'rpmbuild -ba ~/rpmbuild/SPECS/$PACKAGE_NAME.spec \ --define="_build_number $CI_BUILD_ID" \ --define="_python_version_min 2.7" \ --define="_version $MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL" \ --define="_package_name $PACKAGE_NAME" \ --define="_summary $SUMMARY"' - scp rpmbuild/RPMS/noarch/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL-$CI_BUILD_ID.noarch.rpm $DEV_SERVER:~/. tags: - python2.7 - postgres - linux - rpm except: - tags Question: How do I use this value properly? Additional Info: Removing this section from the YAML file causes everything to work so the rest of the file is in working order. (Of course undefined variables lead to script errors...) Even just reducing the variables for testing down to just PACKAGE_NAME causes the same break.
The original answer is no longer correct. The original documentation now stands, and there are more ways as well: variables can be created from the GUI, the API, or by being defined in .gitlab-ci.yml. https://docs.gitlab.com/ce/ci/variables/README.html
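A minimal sketch of YAML-defined variables on a current GitLab, reusing names from the question (the question's older GitLab 7.13 / runner 0.5.2 appears to be why the Lint rejected it at the time):

    variables:                      # global, visible to every job
      PACKAGE_NAME: "awesome-django-app"
      MAJOR_RELEASE: "1"

    build:
      stage: build
      variables:                    # job-level, adds to or overrides the globals
        PATCH_LEVEL: "0dev"
      script:
        - echo "$PACKAGE_NAME-$MAJOR_RELEASE.$PATCH_LEVEL"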
GitLab
31,844,861
30
I am a bit confused about the difference between GitLab CI pipeline workflow:rules and job:rules workflow: rules: - if: '$CI_PIPELINE_SOURCE == "push"' - if: '$CI_PIPELINE_SOURCE != "schedule"' and test: stage: test image: image script: - echo "Hello world!" rules: - if: $CI_PIPELINE_SOURCE == "schedule" What happens if both of them are used in the same .gitlab-ci.yml file?
With workflow you configure when a pipeline is created while with rules you configure when a job is created. So in your example pipelines are created for pushes but cannot be scheduled while your test job will only run when scheduled. But as workflow rules take precedence over job rules, no pipeline will be created in your example as your rules for workflow and job are mutually exclusive.
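A sketch of how the two could be made to agree, so that scheduled pipelines are created and the test job is added to them (adapted from the question's snippet):

    workflow:
      rules:
        - if: '$CI_PIPELINE_SOURCE == "schedule"'   # allow scheduled pipelines
        - if: '$CI_PIPELINE_SOURCE == "push"'       # allow push pipelines

    test:
      stage: test
      script:
        - echo "Hello world!"
      rules:
        - if: '$CI_PIPELINE_SOURCE == "schedule"'   # job is only added to scheduled pipelines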
GitLab
67,314,497
29
I have a JHipster project which has been deployed on Heroku via GitLab for several months. Since yesterday, I cannot deploy a new version because I get this error: FAILURE: Build failed with an exception. 32 * What went wrong: 33 A problem occurred configuring root project 'yvidya'. 34 > Could not resolve all artifacts for configuration ':classpath'. 35 > Could not resolve io.spring.gradle:propdeps-plugin:0.0.10.RELEASE. 36 Required by: 37 project : 38 > Could not resolve io.spring.gradle:propdeps-plugin:0.0.10.RELEASE. 39 > Could not get resource 'http://repo.spring.io/plugins-release/io/spring/gradle/propdeps-plugin/0.0.10.RELEASE/propdeps-plugin-0.0.10.RELEASE.pom'. 40 > Could not GET 'http://repo.spring.io/plugins-release/io/spring/gradle/propdeps-plugin/0.0.10.RELEASE/propdeps-plugin-0.0.10.RELEASE.pom'. Received status code 403 from server: Forbidden Does anyone know why this error occurs, and how to solve it?
Open your build.gradle file and change the Spring Maven repository URL from http to https.
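A sketch of what the change might look like; the exact block and line depend on the JHipster-generated build.gradle, so treat the surrounding structure as an assumption:

    buildscript {
        repositories {
            // was: maven { url "http://repo.spring.io/plugins-release" }
            maven { url "https://repo.spring.io/plugins-release" }
        }
    }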
GitLab
59,767,726
29
GitLab CI allows adding custom variables to a project. It allows to use a secret variable of type file where I specify a Key that is the variable name and Value that is the content of a file(e.g. content of certificate) Then during execution of the pipeline the content will be saved as a temporary file and calling the variable name will return the path to the created file. Ultimately I need to copy this file to a Docker container that is created when building the project. (docker build ... in the yml) When testing if the variable works, I tried echo $VARIABLE in .gitlab-ci.yml and it works, returns path of temp file. But when doing RUN echo $VARIABLE in the Dockerfile, it is empty. Therefore I also cannot use ADD $VARIABLE /tmp/ which is my goal. Is there a way to solve this and make this file available to the Dockerfile? I am new to Docker and GitLab and not sure where else to look.
I had to pass the value as a Docker build argument in the .gitlab-ci.yml file (--build-arg VARIABLE_NAME=variable_value) and declare ARG VARIABLE_NAME in the Dockerfile, so the Dockerfile knows it needs to take the variable from the build environment.
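A hedged sketch of that approach; CERT_FILE is a hypothetical file-type CI variable (so $CERT_FILE expands to the path of the temp file), and note that build args are recorded in the image history, so this is only suitable for non-secret data:

    # .gitlab-ci.yml
    build:
      script:
        - docker build --build-arg CERT_CONTENT="$(cat $CERT_FILE)" -t myimage .

    # Dockerfile
    ARG CERT_CONTENT
    RUN echo "$CERT_CONTENT" > /tmp/cert.pem   # materialize the value inside the image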
GitLab
58,939,500
29
git merge --no-ff account-creation Auto-merging package-lock.json CONFLICT (content): Merge conflict in package-lock.json Automatic merge failed; fix conflicts and then commit the result. Any idea regarding this issue ?
As per the docs: Resolving lockfile conflicts Occasionally, two separate npm install will create package locks that cause merge conflicts in source control systems. As of [email protected], these conflicts can be resolved by manually fixing any package.json conflicts, and then running npm install [--package-lock-only] again. npm will automatically resolve any conflicts for you and write a merged package lock that includes all the dependencies from both branches in a reasonable tree. If --package-lock-only is provided, it will do this without also modifying your local node_modules/. To make this process seamless on git, consider installing npm-merge-driver, which will teach git how to do this itself without any user interaction. In short: $ npx npm-merge-driver install -g will let you do this, and even works with [email protected] versions of npm 5, albeit a bit more noisily. Note that if package.json itself conflicts, you will have to resolve that by hand and run npm install manually, even with the merge driver.
GitLab
50,160,311
29
I committed successfully in my local repository. When I try to push with: git push https://gitlab.com/priceinsight/jmt4manager/compare/develop...2-retrieve-list-userrecord# 2-retrieve-list-userrecord -v I get this error: Pushing to https://gitlab.com/priceinsight/jmt4manager/compare/develop...2-retrieve-list-userrecord# fatal: unable to update url base from redirection: asked for: https://gitlab.com/priceinsight/jmt4manager/compare/develop...2-retrieve-list-userrecord#/info/refs?service=git-receive-pack redirect: https://gitlab.com/users/sign_in
The URL you try to push to is not correct. You are trying to push to the URL https://gitlab.com/priceinsight/jmt4manager/compare/develop...2-retrieve-list-userrecord# which is a webpage that compares two branches and not the URL of a repository. The repository would be https://gitlab.com/priceinsight/jmt4manager.
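A sketch of the fix, assuming origin is the remote you want to correct and keeping the branch name from the question:

    git remote set-url origin https://gitlab.com/priceinsight/jmt4manager.git
    git push -u origin 2-retrieve-list-userrecord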
GitLab
43,835,309
29
When using GitLab CI, as well as the gitlab-ci-multi-runner, I'm unable to get internally-started Docker containers to expose their ports to the "host", which is the Docker image in which the build is running. My .gitlab-ci.yml file: test: image: docker stage: test services: - docker:dind script: - APP_CONTAINER_ID=`docker run -d --privileged -p "9143:9143" appropriate/nc nc -l 9143` - netstat -a - docker exec $APP_CONTAINER_ID netstat -a - nc -v localhost 9143 My command: gitlab-ci-multi-runner exec docker --docker-privileged test The output: $ netstat -a Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 runner--project-1-concurrent-0:54664 docker:2375 TIME_WAIT tcp 0 0 runner--project-1-concurrent-0:54666 docker:2375 TIME_WAIT Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path $ docker exec $APP_CONTAINER_ID netstat -a Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:9143 0.0.0.0:* LISTEN Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path $ nc -v localhost 9143 ERROR: Build failed: exit code 1 FATAL: exit code 1 What am I doing wrong here? Original Question Follows - above is a shorter, easier-to-test example I have an application image that listens on port 9143. Its startup and config is managed via docker-compose.yml, and works great on my local machine with docker-compose up - I can access localhost:9143 without issue. However, when running on GitLab CI (the gitlab.com version) via a shared runner, the port doesn't seem to be exposed. The relevant portion of my .gitlab-ci.yml: test: image: craigotis/buildtools:v1 stage: test script: - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/craigotis/myapp - docker-compose up -d - sleep 60 # a temporary hack to get the logs - docker-compose logs - docker-machine env - docker-compose port app 9143 - netstat -a - docker-compose ps - /usr/local/bin/wait-for-it.sh -h localhost -p 9143 -t 60 - cd mocha - npm i - npm test - docker-compose down The output is: $ docker-compose logs ... app_1 | [Thread-1] INFO spark.webserver.SparkServer - == Spark has ignited ... app_1 | [Thread-1] INFO spark.webserver.SparkServer - >> Listening on 0.0.0.0:9143 app_1 | [Thread-1] INFO org.eclipse.jetty.server.Server - jetty-9.0.z-SNAPSHOT app_1 | [Thread-1] INFO org.eclipse.jetty.server.ServerConnector - Started ServerConnector@6919dc5{HTTP/1.1}{0.0.0.0:9143} ... $ docker-compose port app 9143 0.0.0.0:9143 $ netstat -a Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 runner-e11ae361-project-1925166-concurrent-0:53646 docker:2375 TIME_WAIT tcp 0 0 runner-e11ae361-project-1925166-concurrent-0:53644 docker:2375 TIME_WAIT tcp 0 0 runner-e11ae361-project-1925166-concurrent-0:53642 docker:2375 TIME_WAIT Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path $ docker-compose ps stty: standard input: Not a tty Name Command State Ports ---------------------------------------------------------------------------------------- my_app_1 wait-for-it.sh mysql_serve ... Up 8080/tcp, 0.0.0.0:9143->9143/tcp mysql_server docker-entrypoint.sh --cha ... 
Up 3306/tcp $ /usr/local/bin/wait-for-it.sh -h localhost -p 9143 -t 60 wait-for-it.sh: waiting 60 seconds for localhost:9143 wait-for-it.sh: timeout occurred after waiting 60 seconds for localhost:9143 The contents of my docker-compose.yml: version: '2' networks: app_net: driver: bridge services: app: image: registry.gitlab.com/craigotis/myapp:latest depends_on: - "db" networks: - app_net command: wait-for-it.sh mysql_server:3306 -t 60 -- java -jar /opt/app*.jar ports: - "9143:9143" db: image: mysql:latest networks: - app_net container_name: mysql_server environment: - MYSQL_ALLOW_EMPTY_PASSWORD=true It seems like my application container is listening on 9143, and it's properly exposed to the shared GitLab runner, but it doesn't seem to actually be exposed. It works fine on my local machine - is there some special workaround/tweak I need to make this work inside a Docker container running on GitLab?
When using docker:dind, a container is created and your docker-compose containers get set up within it. The ports are exposed on localhost within the docker:dind container, so you cannot access them as localhost from the environment your code is executing in. A hostname of docker is set up for you to reference this docker:dind container (you can check with cat /etc/hosts). Instead of referencing localhost:9143 you should use docker:9143.
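Applied to the snippets in the question, that means replacing localhost with the docker alias, e.g.:

    - /usr/local/bin/wait-for-it.sh -h docker -p 9143 -t 60
    - nc -v docker 9143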
GitLab
41,559,660
29
We have a GitLab setup at our office, and we have somewhere around 100-150 projects to create there each week. While the admin wants to keep control of creating repos and assigning teams to them, it is quite a task for anyone to create that many repos every week. Is there a way to create a repo on GitLab using the CLI? I won't mind if I have to use SSH for it.
gitlab-cli is no longer maintained, the author references the Gitlab module to be used instead - it also includes a CLI tool. For your specific request - namely creating a project on the command line, use the following command: gitlab create_project "YOUR_PROJECT_NAME" "{namespace_id: 'YOUR_NUMERIC_GROUP_ID'}" Be sure to use the option namespace_id and not group_id! If you are not sure what your GROUP_ID is, you can use gitlab groups | grep YOUR_GROUP_NAME to find out. The parameters for each command can be inferred from the API documentation. Any non-scalar valued parameter has to be encoded in an inline YAML syntax (as above).
GitLab
19,585,211
29
I know it is possible to fetch and then use checkout with the path/to/file to download that specific file. My issue is that I have a 1 MB data cap per day, and git fetch will download all the data anyway, even if it does not save it to disk until I use git checkout, so I still use up my data. Is my understanding of how git fetch/checkout works correct? Is there a way to download only a specific file, to see if there is a new version before proceeding with the download?
Gitlab has a rest API for that. You can GET a file from repository with curl: curl https://gitlab.com/api/v4/projects/:id/repository/files/:filename\?ref\=:ref For example: curl https://gitlab.com/api/v4/projects/12949323/repository/files/.gitignore\?ref\=master If your repository isn't public you also need to provide an access token by adding --header 'Private-Token: <your_access_token>'. Links: You can check how to find repository api id here. Api documentation More on tokens There is also a python library that uses this api. Note that this is GitLab specific solution and won't work for other hostings.
GitLab
56,943,327
28
Consider the following gilab-ci.yml script: stages: - build_for_ui_automation - independent_job variables: LC_ALL: "en_US.UTF-8" LANG: "en_US.UTF-8" before_script: - gem install bundler - bundle install build_for_ui_automation: dependencies: [] stage: build_for_ui_automation artifacts: paths: - fastlane/screenshots - fastlane/logs - fastlane/test_output - fastlane/report.xml script: - bundle exec fastlane ui_automation tags: - ios only: - schedules allow_failure: false # This should be added and trigerred independently from "build_for_ui_automation" independent_job: dependencies: [] stage: independent_job artifacts: paths: - fastlane/screenshots - fastlane/logs - fastlane/test_output - fastlane/report.xml script: - bundle exec fastlane independent_job tags: - ios only: - schedules allow_failure: false I'd like to be able to schedule these two jobs independently, but following the rules: build_for_ui_automation runs every day at 5 AM independent_job runs every day at 5 PM However, with the current setup I can only trigger the whole pipeline, which will go through both jobs in sequence. How can I have a schedule triggering only a single job?
Note - This answer used, only-except of GitLab CI to manipulate what job gets added to the schedule. However, as of today, GitLab has stopped active maintenance of the same commands and suggests we use rules instead. Here's the link. I have modified the original answer to use rules and tested the working. To build off @Naor Tedgi's answer, you can define a variable in your pipeline schedules. For example, set SCHEDULE_TYPE = "build_ui" in the schedule for build_for_ui_automation and SCHEDULE_TYPE = "independent" in the schedule for independent_job. Then your .gitlab-ci.yml file could be modified as: stages: - build_for_ui_automation - independent_job variables: LC_ALL: "en_US.UTF-8" LANG: "en_US.UTF-8" before_script: - gem install bundler - bundle install build_for_ui_automation: dependencies: [] stage: build_for_ui_automation artifacts: paths: - fastlane/screenshots - fastlane/logs - fastlane/test_output - fastlane/report.xml script: - bundle exec fastlane ui_automation tags: - ios rules: - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TYPE == "build_ui"' when: always variables: SCHEDULE_TYPE: "build_ui" ANOTHER_VARIABLE: "dummy" allow_failure: false # This should be added and trigerred independently from "build_for_ui_automation" independent_job: dependencies: [] stage: independent_job artifacts: paths: - fastlane/screenshots - fastlane/logs - fastlane/test_output - fastlane/report.xml script: - bundle exec fastlane independent_job tags: - ios rules: - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TYPE == "independent"' when: always variables: SCHEDULE_TYPE: "independent" ANOTHER_VARIABLE: "dummy123" allow_failure: false where note the syntax change in the only sections to execute the jobs only during schedules and when the schedule variable is matching.
GitLab
56,686,864
28
Is it possible to run a pipeline on a specific runner? (not using tags) Is it feasible to use environments, or even gitlab runner exec maybe? Scenario: Have an existing project with multiple runners already attached to it (specific project token used to register the runner) and has it's own associated tags (so can't change these either). I'm adding a new runner, however need to test it first to ensure it works, but I need to force the pipeline to build on this machine, without changing any tags, or specific project of the runner.
You have two mechanisms by which you can attempt to isolate a new runner for testing: use tags and private runner attachment (already called out) use the gitlab-runner exec verb directly on the runner canary the runner for a single build only Option 1 use tags and private runner attachment (already called out). To further expand on this... even in a draconian setup where you can't alter tags and whatnot -- you can always FORK the project. In your new private fork, you can go to Settings >> CI/CD and override the .gitlab-ci.yml file in the Custom CI Configuration Path under the General Pipelines Settings. This allows you to git cp .gitlab-ci.yml .mycustomgitlab-ci.yml and then simply git add/git commit/git push and you're in business. Opinion: If you cannot use the mechanisms in place to adjust the tags on the questionable runner and isolate a new forked project, this isn't a technical problem, it is a political problem. Option 2 Gitlab-runner exec.... Assuming you're using a shell gitlab runner... SSH to the questionable gitlab runner box you're trying to test Clone the repo for the project in question to ... say ... /tmp/myrepo Execute Gitlab-Runner: /path/to/gitlab-runner exec shell {.gitlab-ci.yml target} See https://docs.gitlab.com/runner/commands/#gitlab-runner-exec and a blog about it at https://substrakt.com/how-to-debug-gitlab-ci-builds-locally/ Option 3 Canary the gitlab-runner for a single build. You can spin up the gitlab-runner process to do N number of builds and then go back offline. See: https://docs.gitlab.com/runner/commands/#gitlab-runner-run-single ... This is not zero-impact, but will definitely limit the blast radius of any problems.
GitLab
47,059,975
28
The goal is to have everyone get a notification for every failed pipeline (at their discretion). Currently, any of us can run a pipeline on this project branch, and the creator of the pipeline gets an email, no one else does. I have tried setting the notification level to watch and custom (with failed pipelines checked) at project, group and global levels without success. The help page regarding notifications says the failed pipeline checkbox for custom notification levels notifies the author of the pipeline (which is the behavior I am experiencing). Is there any way to allow multiple people to get notified of a failed pipeline? Using Gitlab CE v10.0 Have Group (security::internal) Group has Project (security::internal) Project has scheduled pipleine (runs nighly) Pipeline runs integration tests (purposely failing) Schedule created by me (schedules have to have an owner) When the automated pipeline runs and fails, I get an email (good) No one else gets email (bad)
Have a look at the following integration: Project -> Settings -> Integrations -> Pipelines emails
GitLab
46,472,631
28
I am trying to migrate an GitLab setup from 7.8.2 to 7.12.2. I am not really sure how to go about this. I have installed a new box, on Ubuntu 14.04.2. Now I would really like to just export the old user/group database and import it on the new server, then copy all the repositories from the old server to the new one. And tell the users to start using the new one. I do not know which database my new gitlab installation uses, neither the old one. I have been up and down the gitlab documentation, but cannot find sufficient information on how to migrate from one server to another. I followed the instructions on https://about.gitlab.com/downloads/ for ubuntu, and all seems to work fine. I am looking for a way to export the users/groups from the old gitlab box and import it on the new gitlab box. and then just copy all the repositories from the old box to the new one. Any assistance? I know next to nothing about gitlab :(
I would take the following steps: (1) find out if GitLab is installed by hand or with gitlab-omnibus, since you need to know this for the exact backup and update steps; (2) do a backup of the old version, just to be safe; (3) update the current 7.8.2 instance to 7.12.2 by following the update guideline; (4) back up the newly updated GitLab system; (5) restore that backup on the new system. Backup & restore documentation can be found here.
GitLab
31,534,293
28
I was able to ignore directory, file changes using the following syntax. build: script: npm run build except: changes: - "*.md" - "src/**/*.ts" With this configuration build job is going to run except git changes include only *.md extension file or *.ts files in src directory. They're ignored. But then they deprecated this only:changes, except:changes syntax and warned to use rules syntax instead. Unfortunately, I cannot seem to find how to ignore directory, file changes using this new syntax.
Based on the documentation of rules:changes, it seems when: never must be used with the rules:changes syntax, like the following: build: script: npm run build rules: - changes: - "*.md" - "src/**/*.ts" when: never - when: always If the changed paths in the repository match the above patterns, the job will not be added to any pipeline. In all other cases the job will always be run.
GitLab
62,689,235
27
I'm running my Gitlab with Docker and I forgot my Gitlab root password. How to change it ?
I found a way to make it work. First connect to your GitLab container from the command line: find your Docker CONTAINER_ID with docker ps --all, then run e.g. docker exec -it d0bbe0e1e3db bash <-- with your CONTAINER_ID $ gitlab-rails console -e production user = User.where(id: 1).first user.password = 'your secret' user.password_confirmation = 'your secret' user.save exit
GitLab
55,747,402
27
I'm using the following command to remove a local branch with the force delete option: $ git branch -D <branch_name> My question is: if I delete a local branch that had an upstream set and then do a normal push, it won't delete the remote branch, right? What should I do in this situation? [NOTE]: "-D" is the force delete option. I want to delete the local branch and keep the remote branch on origin.
git will only delete your local branch, please keep in mind that local and remote branches actually have nothing to do with each other. They are completely separate objects in Git. Even if you've established a tracking connection (which you should for most scenarios), this still does not mean that deleting one would delete the other, too! If you want any branch item to be deleted, you need to delete it explicitly. Deleting local branches in Git git branch -d <branch_name> using a capital -D is like a "force" version of -d. If the branch isn't fully merged you'll get an error if you use the lowercase version. This again has no relevance to the remote branches and will only delete your local branch. Deleting remote branches in Git git push origin --delete <branch_name> so to your question If I delete a local branch that had an upstream set and then do a normal push, it won't delete the remote branch right? You are correct it will not delete the remote branch.
GitLab
51,295,388
27
I have a solution with several .NET projects in it. I use GitLab, not self-hosted, for version control and would like to start using their CI tools as well. I have added the following .gitlab-ci.yml file to my root: stages: - build - test build_job: stage: build script: - 'echo building...' - 'msbuild.exe Bizio.sln' except: - tags test_job: stage: test script: - 'echo: testing...' - 'msbuild.exe Bizio.sln' - 'dir /s /b *.Tests.dll | findstr /r Tests\\bin\\ > tests_list.txt' - 'for /f %%f in (tests_list.txt) do mstest.exe /testcontainer: "%%f"' except: - tags The build stage always fails because it doesn't know what msbuild is. The exact error is: /bin/bash: line 61: msbuild.exe: command not found After some investigating, I've figured out that I'm using a shared runner. Here is the entire output from the job run: Running with gitlab-runner 10.6.0-rc1 (0a9d5de9) on docker-auto-scale 72989761 Using Docker executor with image ruby:2.5 ... Pulling docker image ruby:2.5 ... Using docker image sha256:bae0455cb2b9010f134a2da3a1fba9d217506beec2d41950d151e12a3112c418 for ruby:2.5 ... Running on runner-72989761-project-1239128-concurrent-0 via runner-72989761-srm-1520985217-1a689f37... Cloning repository... Cloning into '/builds/hyjynx-studios/bizio'... Checking out bc8085a4 as master... Skipping Git submodules setup $ echo building... building... $ C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe Bizio.sln /bin/bash: line 61: msbuild.exe: command not found ERROR: Job failed: exit code 1 It looks like the shared runner I have is using a Docker image for Ruby, which seems wrong. I don't know how I can change that or select a different one that can be used for .NET. After some further investigating I'm getting worried that I'll have to jump through a lot of hoops to get what I want, like using an Azure VM to host a GitLab Runner that can build .NET apps. What do I need to do to use GitLab's CI pipelines to build my .NET solution using a non-self-hosted GitLab instance?
You should be able to set up your own shared runner on a machine with the Framework 4 build tools on it (either using a Docker image, like microsoft/dotnet-framework-build, or just your native machine). The simplest case to get going is using your own desktop, where you know your solution already builds. (Using Docker images for the build is absolutely possible, but involves all of the extra steps of making sure you have Docker working on your machine.) Download gitlab-runner on your computer from https://docs.gitlab.com/runner/install/windows.html Create a directory on your computer (C:\gitlab-runner) Download the latest binary x86 or x64 to that folder Rename the binary to "gitlab-runner.exe" Get a gitlab-ci token for your runner Probably the easiest way to do this is to go to your project in gitlab.com and go to Settings -> CI/CD and expand General Pipeline Settings. In the Runner Token section, click the Reveal Value button to show the token value. You will need this during the runner registration step. Register the gitlab runner according to Registering Runners - Windows Open an elevated command prompt (Run as administrator) cd to c:\gitlab-runner type gitlab-runner register The register prompts will take you through the steps to register the runner, but in a nutshell, you will be putting in gitlab.com as your coordinator URL, entering the token from your project, giving your runner a name, tagging your runner (so that you can associate it with projects that it is capable of building, testing, etc - for simplicity, you can skip tags now), allowing it to run for untagged jobs (again, simplicity, say true), locking the runner to the current project (simplicity, say true), and choosing the executor (enter shell, which is basically saying use the Windows command line) Install gitlab-runner as a service, so that it is mostly always checking for work to do In your command prompt, type gitlab-runner install Then type gitlab-runner start (Now, if you go to Services, you will see gitlab-runner listed, and it should be running - when/if the runner crashes, you should go to Services to restart it) Whew. Now that the runner is set up, it should be activated when you push a commit or a merge is requested. If you are still having issues with the .gitlab-ci.yml file building properly, you can debug it locally (without having to keep triggering it through gitlab.com) by going to your solution folder in the command line and then executing c:\gitlab-runner\gitlab-runner build (To test the build step, for example). If the build step has a problem finding your solution file, you might want to try changing it from 'msbuild.exe Bizio.sln' to 'msbuild.exe .\Bizio.sln'
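Tying this back to the original question, a minimal .gitlab-ci.yml for such a Windows shell runner might look like the sketch below. This is only an illustration: it assumes msbuild.exe is on the PATH of the user the runner service runs as (otherwise use the full path to MSBuild), and the solution name is taken from the question.

stages:
  - build

build_job:
  stage: build
  script:
    # shell executor on Windows; msbuild.exe assumed to be on PATH
    - 'msbuild.exe .\Bizio.sln /p:Configuration=Release'
  except:
    - tags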
GitLab
49,268,560
27
What's the meaning of "Allowed to push" and "Allowed to merge" in Gitlab protected branches
Allowed to push means just that - the user is allowed to git push to the branch. Allowed to merge means that the user is allowed to accept merge requests into that branch.
GitLab
41,782,553
27
When I run git push, I get an error like this: remote: Access denied fatal: unable to access 'https://gitlab.com/myname/mysystem.git/': The requested URL returned error: 403 Can anyone help me?
For Windows users, check this unbelievably easy solution, which worked for me: Go to Windows Credential Manager (press the Windows key and type 'credential') to edit the git entry under Windows Credentials. Replace the old password with the new one.
GitLab
41,263,662
27
I have GitLab CE (v8.5 at least) installed on my server. I would like to integrate it with sonarqube so that merge requests shows any issues in the comment section. Has anyone integrated these 2 systems successfully? At the moment, only sonarqube plugin I found is the following but I'm not able to successfully integrate it with GitLab. https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin I used a docker container for sonarqube (v5.5) and copied the plugin into extensions directory. Configured gitlab user token and gitlab uri in the plugin's setting page in sonarqube. I'm using GitLab CI for continuous integration and I have the following build job for sonarqube (using gradle) sh gradlew sonarqube -Psonar.analysis.mode=preview -Psonar.issuesReport.console.enable=true \ -Psonar.gitlab.commit_sha=$CI_BUILD_REF -Psonar.gitlab.ref_name=$CI_BUILD_REF_NAME \ -Psonar.gitlab.project_id=$CI_PROJECT_ID But, I'm not sure what to after this. Couple of questions: What happens when a merge request does not exist yet? In my git workflow, users will submit a merge request after they're done working on their branch. So, how will this plugin know which merge request to update? Right now I have the sonarqube valiation task set to be running only on master branch. I think this will need to be changed to user branches too, right? I did try submitting a merge request, but I didn't see any comments being added. I think I'm missing some configuration or a process. Really appreciate if you can help point me to the right direction.
I had the same problem as yours. Comments were not showing in the GitLab MR. I made it work with two fixes: first, make sure the preview mode is used; if it is not, the issues are not reported to GitLab. Second, for issues to appear as GitLab comments, they have to be "new" issues. If you launched an analysis of your project before pushing to GitLab, the issues will not be considered as new by SonarQube, and no comment will be added to the MR. If this does not solve your problem, try cloning the plugin repo, adding traces to the code (CommitIssuePostJob.java is the place to look), packaging the jar with Maven and deploying the patched jar to your Sonar installation. That is how I saw that I had no new issues to report.
GitLab
37,929,055
27
I'd like to get a list of the issues for the project YYYYYY and a username XXXXXX. curl --header "PRIVATE-TOKEN: myownprivatetoken" "https://gitlab.com/api/v3/projects/YYYYYY/issues" curl --header "PRIVATE-TOKEN: myownprivatetoken" --header "SUDO: XXXXXX" "https://gitlab.com/api/v3/projects/YYYYYY/issues" curl --header "PRIVATE-TOKEN: myownprivatetoken" "https://gitlab.com/api/v3/XXXXXX/projects/YYYYYY/issues" But they only return: {"message":"404 Project Not Found"} or <html><body>You are being <a href="https://gitlab.com/users/sign_in">redirected</a>.</body></html> It seems to me that I have misinterpreted the API docs at http://doc.gitlab.com/ce/api/issues.html and http://doc.gitlab.com/ce/api/README.html . So what am I doing wrong?
The documentation tells you this about how to retrieve issues from a project: GET /projects/:id/issues And you tried: curl --header "PRIVATE-TOKEN: xxx" "https://gitlab.com/api/v3/projects/YYYYYY/issues" This is correct, but the parameter you give, YYYYYY, has to be the project id, so it has to be an integer, not text with the project name or path. You need to use something like: curl --header "PRIVATE-TOKEN: xxx" "https://gitlab.com/api/v3/projects/234/issues" where 234 is the id of your project. To get this integer id of your project, simply do: curl --header "PRIVATE-TOKEN: xxx" "https://gitlab.com/api/v3/projects" This will list all your projects and will give you the unique integer identifier of a project in the id field: [ { "id": 4, <-------- //This one "name": "my super mega project", "description": null, .....
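If you have jq installed, a quick way to pick that integer id out of the project list is sketched below. The namespace/project path is a placeholder, and this keeps the same v3 endpoint as the answer; on current GitLab you would use /api/v4 instead (where you can also URL-encode the "group/project" path directly in place of the id).

curl --header "PRIVATE-TOKEN: xxx" "https://gitlab.com/api/v3/projects" \
  | jq '.[] | select(.path_with_namespace == "mygroup/myproject") | .id'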
GitLab
31,805,041
27
Is it dangerous to keep code in GitLab and GitHub? I heard it is quite safe to commit our code to GitLab and GitHub. The reason is that all code is hashed, so it is nearly impossible for anyone to alter the code without using the git tool. Is this true?
As I mentioned in "Why does Git use a cryptographic hash function?", it is "safe" in terms of data integrity (Linus Torvalds, 2007): We check checksums that is considered cryptographically secure. Nobody has been able to break SHA-1, but the point is, SHA-1 as far as git is concerned, isn't even a security feature. It's purely a consistency check. The security parts are elsewhere. A lot of people assume since git uses SHA-1 and SHA-1 is used for cryptographically secure stuff, they think that it's a huge security feature. It has nothing at all to do with security, it's just the best hash you can get. Having a good hash is good for being able to trust your data This has nothing to do with: privacy (which doesn't depend on Git itself, but on the Git hosting server, like GitHub or BitBucket) user identity (to really be sure about a commit user, as Thilo comments, you can sign commits (see "A Git Horror Story: Repository Integrity With Signed Commits")) The OP adds: what I mean is the owner of gitlab or github may steal our code This is a question of trust: Does the git hosting server have access to your code if it is in a private repo? Technically yes. Will they access your private code? As mentioned in "Can third party hosts be trusted for closed-source/private source code management?", nothing prevents them from doing so. Yet, many startups have their private code on, for instance, GitHub. If you have a real confidentiality concern, then it is important you keep ownership of the whole codebase, including the server where it is stored (meaning having your own Git repo hosting server).
GitLab
30,296,072
27
How can I make Side-by-side be the default diff for my GitLab installation or project or profile?
Update February 2016 The issue is now at issue CE 3071 and... was resolved by commit 9fdd605! A cookie now retain your diff view choice, with GitLab 8.5.0+. Original answer (February 2015) That doesn't seem to be possible right now, and can be voted up at the suggestion 7082397 Remember side-by-side diff choice Right now when I want to review code, I have to choose side-by-side diff each time I open a Merge Request because that choice is not stored/saved anywhere. Either store my last choice/change of viewer globally somewhere and use it on all diff views from that point on. Or let me have a config option on my account where I can specify my preferred choice. The side-by-side diff view was introduced with GitLab 6.4 (Dec. 2013) with commit b51c2c8.
GitLab
28,180,650
27
I've got root access to our production server and I want to deploy the latest version in git to the server but I'm running into the error below when I "git pull" on the folder I want to update. I've browsed around a bit, but can't find a clear answer on what to do.. The staging server runs on the same machine, but just in a different folder and when I pull on that folder it all goes fine. I'm not very experienced when it comes to Linux, so please help me out with a clear answer on how to fix :-) Otherwise I have access to anything I need p.s. This has worked in the past, so I'm assuming it's got something to do with the SSH key Error: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: POSSIBLE DNS SPOOFING DETECTED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ The ECDSA host key for www.site.org has changed, and the key for the corresponding IP address x.x.x.x is unknown. This could either mean that DNS SPOOFING is happening or the IP address for the host and its host key have changed at the same time. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. The fingerprint for the ECDSA key sent by the remote host is ************* Please contact your system administrator. Add correct host key in /root/.ssh/known_hosts to get rid of this message. Offending ECDSA key in /root/.ssh/known_hosts:1 remove with: ssh-keygen -f "/root/.ssh/known_hosts" -R gitlab.site.org ECDSA host key for gitlab.site.org has changed and you have requested strict checking. Host key verification failed.
In the log you see the following text: (...) Please contact your system administrator. Add correct host key in /root/.ssh/known_hosts to get rid of this message. Offending ECDSA key in /root/.ssh/known_hosts:1 remove with: ssh-keygen -f "/root/.ssh/known_hosts" -R gitlab.site.org ECDSA host key for gitlab.site.org has changed and you have requested strict checking. Host key verification failed. So it is a matter of performing the command that is suggested there: ssh-keygen -f "/root/.ssh/known_hosts" -R gitlab.site.org
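If the host key really did change legitimately (for example, the server was reinstalled), you can pin the new key right away instead of waiting for the next interactive prompt. This is just a sketch and assumes you trust the host at this moment:

ssh-keyscan -t ecdsa gitlab.site.org >> /root/.ssh/known_hosts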
GitLab
21,087,695
27
I've just set up GitLab, but I'm completely lost with regards to the admin user. The wiki seems silent about this topic, and Google hasn't been of help either. So, how do I set up admin users with GitLab on LDAP authentication?
You can also set admin permissions for a user by doing something like this in the rails console: user = User.find_by_email("[email protected]") user.admin = true user.save
GitLab
11,761,396
27
I have a problem... My code is in GitLab, my pipeline is in Azure DevOps. I use the classic editor. When I start the pipeline I get the error "fatal: unable to access 'https://my.repos.example:***.git/': SSL certificate problem: unable to get local issuer certificate" Please help me!
For me this issue came up when attempting to clone a repository through Visual Studio 2019. Upon selecting the Azure option in the repository menu I then picked the codebase I wanted to clone. After this step I was prompted with an error of: "SSL certificate problem: unable to get local issuer certificate" I ran the git command setting up the global ssl backend: > git config --global http.sslbackend schannel And the next time I tried the steps listed above, all was well.
GitLab
67,976,050
26
How do I prevent a gitlab ci pipeline being triggered when I add a git tag? I'm running this command locally (as opposed to within a gitlab-ci job) git tag -a "xyz" and then pushing the tag; and this triggers various pipelines. I want to exclude some of those pipelines from running. I'm trying variations on ideas from questions such as this; that question is using only, I'm wanting to exclude, so I'm trying except. The answers there have two variants, one with refs one without. build: # ... my work here ... except: - tags build: # ... my work here ... except: refs: - tags Neither seem to have any effect; I add a tag, the build still happens. My understanding may be completely awry here as there seem to be three possible meanings of the word tags and when reading docs or examples I'm not always sure which meaning is applicable: Git tags applied using git tag Gitlab CI tags used to determine which runners pick a job The ref identifier of a commit used to trigger a pipeline via the REST API. This is usually a branch name, but could be a git tag. I'm interested in controlling what happens if the first case. It does seem clear from comments so far that "except: -tags" is not relevant to my case, so is there any approach that does work?
It looks like GitLab recommends using rules instead of except as per the documentation only and except are not being actively developed. rules is the preferred keyword to control when to add jobs to pipelines. So it'd be your_job: stage: your_stage script: - echo "Hello" rules: - if: $CI_COMMIT_TAG when: never - when: always
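If you want to stop the whole pipeline, not just a single job, from being created for tag pushes, a top-level workflow:rules block in .gitlab-ci.yml should also work. A minimal sketch:

workflow:
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - when: always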
GitLab
60,351,496
26
I am currently using GitLab API to return all projects within a group. The question I have is, how do I return all projects if there are over 100 projects in the group? The curl command I'm using is curl --header "PRIVATE-TOKEN: **********" http://gitlab.example.com/api/v4/groups/myGroup/projects?per_page=100&page=1 I understand that the default page=1 and the max per_page=100 so what do I do if there are over 100 projects? If I set page=2, it just returns all the projects after the first 100.
Check the response for the X-Total-Pages header. As long as page is smaller than total pages, you have to call the api again and increment the page variable. https://docs.gitlab.com/ee/api/rest/index.html#pagination-link-header
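As a rough shell sketch of that loop (the host, group name and token are the placeholders from the question, and it assumes GitLab returns the X-Total-Pages header for this listing):

total_pages=$(curl -s -o /dev/null -D - --header "PRIVATE-TOKEN: **********" \
  "http://gitlab.example.com/api/v4/groups/myGroup/projects?per_page=100&page=1" \
  | grep -i '^x-total-pages:' | tr -d ' \r' | cut -d: -f2)

for page in $(seq 1 "$total_pages"); do
  # save each page of results to its own file
  curl -s --header "PRIVATE-TOKEN: **********" \
    "http://gitlab.example.com/api/v4/groups/myGroup/projects?per_page=100&page=${page}" \
    -o "projects_page_${page}.json"
done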
GitLab
47,414,024
26
I am a big fan of the __TOC__ that creates a table of contents on a Wikimedia page. Whenever you write things like this on a Wikimedia page: This is a page for my project ## Credits ## bla bla ## License ## bla bla __TOC__ automagically creates a table of contents that allows you to navigate through inner links of the page. I noticed that this is somehow possible in GitHub wiki pages: How do I create some kind of table of content in GitHub wiki? by using different tricks. However, I miss this feature in GitLab wiki pages. Does it provide such functionality?
So this exists! I finally found a Merge Request in in the GitLab Community Edition: Replace Gollum [[_TOC_]] tag with result of TableOfContentsFilter As its name describes, to have a table of contents you need to write the following: [[_TOC_]] All together, you can write something like: This is a page for my project [[_TOC_]] ## Credits bla bla ## License bla bla and will show like this: This is available from the GitLab 8.6 release as described in its milestone.
GitLab
47,154,661
26
I have been Googling this for a few hours, but can't find it. I have a Java/Spring application (+MySQL, if it matters) and I am looking to create CI for it. I know what to do and how: I know that I have to move my Git repo to GitLab. A push to the repo will trigger the CI script. GitLab will build my Docker image into the GitLab Docker Registry. The question is: What do I have to do to force Docker Compose on my VPS to pull the new image from GitLab and restart the server? I know (correct me if I am wrong) that on my VPS I should run docker-compose pull && docker-compose up inside my app folder, but I have literally no idea how to make this automatic with GitLab.
What do I have to do to force docker compose on my VPS to pull the new image from Gitlab and restart the server? @m-uu, you don't need restart the server at all, just do docker-compose up to pull new image and restart service I know (correct me if I am wrong) that on my VPS I should run docker-compose pull && docker-compose up inside my app folder, but I have literally no idea how to make it automatically with Gitlab? Yes, you are on the right way. Look at my Gitlab CI configuration file, I think it doesn't difficult to change it for Java project. Just give you ideas how to build, push to your registry and deploy an image to your server. One thing you need to do is generate SSH keys and push public to server (.ssh/authorized_keys) and private to GITLAB pipeline secret variable (https://docs.gitlab.com/ee/ci/variables/#secret-variables) cache: key: "cache" paths: - junte-api stages: - build - build_image - deploy build: image: golang:1.7 stage: build script: - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )' - eval $(ssh-agent -s) - echo "$SSH_PRIVATE_KEY" > ~/key && chmod 600 ~/key - ssh-add ~/key - mkdir -p ~/.ssh - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config' - go get -u github.com/kardianos/govendor - mkdir -p $GOPATH/src/github.com/junte/junte-api - mv * $GOPATH/src/github.com/junte/junte-api - cd $GOPATH/src/github.com/junte/junte-api - govendor sync - go build -o junte-api - cd - - cp $GOPATH/src/github.com/junte/junte-api . build_image: image: docker:latest stage: build_image script: - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY - docker build -t $CI_REGISTRY_IMAGE . - docker push $CI_REGISTRY_IMAGE deploy-dev: stage: deploy image: junte/ssh-agent variables: # should be set up at Gitlab CI env vars SSH_PRIVATE_KEY: $SSH_DEV_PRIVATE_KEY script: # copy docker-compose yml to server - scp docker-compose.dev.yml root@SERVER_IP:/home/junte/junte-api/ # login to gitlab registry - ssh root@SERVER_IP docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY # then we cd to folder with docker-compose, run docker-compose pull to update images, and run services with `docker-compose up -d` - ssh root@SERVER_IP "cd /home/junte/junte-api/ && docker-compose -f docker-compose.dev.yml pull api-dev && HOME=/home/dev docker-compose -f docker-compose.dev.yml up -d" environment: name: dev only: - dev You also need Gitlab runner with Docker support. How install it look at in Gitlab doc, please. About stages: build - just change it to build what you need build_image - very simple, just login to gitlab registry, build new image and push it to registry. Look at cache part, it need to cache files between stages and can be different for you. deploy-dev - that part more about what you asked. Here first 6 commands just install ssh and create your private key file to have access to your VPS. Just copy it and add your SSH_PRIVATE_KEY to secret vars in Gitlab UI. Last 3 SSH commands more interesting for you.
GitLab
44,545,635
26
I've set up my own GitLab server with one project and a GitLab runner configured for it. I'm new to continuous integration servers and therefore don't know how to accomplish the following. Every time I commit to the master branch of my project I would like to deploy the repository to another server and run two shell commands there (npm install and forever restartall). How would I do this? Do I need a runner on the machine which the project is deployed to as well?
You could use gitlab-ci and gitlab-runner [runners.ssh] to deploy to single or multiple servers. The flow: (git_project with yml file) --> (gitlab && gitlab-ci) --> (gitlabrunner) ---runners.ssh---> (deployed_server,[deployed_server2]) You need to register the gitlab-runner with gitlab-ci and set the tag to deployServer on the GitLab web UI. /etc/gitlab-runner/config.toml: [[runners]] url = "http://your.gitlab.server/ci" token = "1ba879596cf3ff778ee744e6decedd" name = "deployServer1" limit = 1 executor = "ssh" builds_dir = "/data/git_build" [runners.ssh] user = "you_user_name" host = "${the_destination_of_deployServer_IP1}" port = "22" identity_file = "/home/you_user_name/.ssh/id_rsa" [[runners]] url = "http://your.gitlab.server/ci" token = "1ba879596cf3ff778ee744e6decedd" name = "deployServer2" limit = 1 executor = "ssh" builds_dir = "/data/git_build" [runners.ssh] user = "you_user_name" host = "${the_destination_of_deployServer_IP2}" port = "22" identity_file = "/home/you_user_name/.ssh/id_rsa" The runners.ssh section means the runner will log in to ${the_destination_of_deployServer_IP1} and ${the_destination_of_deployServer_IP2}, then clone the project into builds_dir. Then write the yml file, for example: .gitlab-ci.yml job_deploy_server1: stage: deploy tags: - deployServer1 script: - npm install && forever restartall job_deploy_server2: stage: deploy tags: - deployServer2 script: - npm install && forever restartall Set your gitlab-runners' tags to deployServer1 and deployServer2 in 'http://your.gitlab.server/ci/admin/runners' When you push your code to GitLab, the gitlab-ci server will parse the .gitlab-ci.yml file in your project and choose a runner with the tag deployServer1 or deployServer2; the gitlab-runner with the deployServer1 tag will log in to ${the_destination_of_deployServer_IP1} (and the one tagged deployServer2 into ${the_destination_of_deployServer_IP2}) over ssh, clone the project into builds_dir, then execute your script: npm install && forever restartall. link: gitlab-runner register runners.ssh
GitLab
33,768,537
26
I set up a fresh CentOS 6.6 install and used the Omniubus installer for the CE of Gitlab. When running gitlab-ctl reconfigure I get the following errors: ================================================================================ Recipe Compile Error in /opt/gitlab/embedded/cookbooks/gitlab/recipes/default.rb ================================================================================ RuntimeError ------------ External URL must include a FQDN Cookbook Trace: --------------- /opt/gitlab/embedded/cookbooks/gitlab/libraries/gitlab.rb:95:in `parse_external_url' /opt/gitlab/embedded/cookbooks/gitlab/libraries/gitlab.rb:191:in `generate_config' /opt/gitlab/embedded/cookbooks/gitlab/recipes/default.rb:34:in `from_file' Relevant File Content: ---------------------- /opt/gitlab/embedded/cookbooks/gitlab/libraries/gitlab.rb: 88: 89: def parse_external_url 90: return unless external_url 91: 92: uri = URI(external_url.to_s) 93: 94: unless uri.host 95>> raise "External URL must include a FQDN" 96: end 97: Gitlab['user']['git_user_email'] ||= "gitlab@#{uri.host}" 98: Gitlab['gitlab_rails']['gitlab_host'] = uri.host 99: Gitlab['gitlab_rails']['gitlab_email_from'] ||= "gitlab@#{uri.host}" 100: 101: case uri.scheme 102: when "http" 103: Gitlab['gitlab_rails']['gitlab_https'] = false 104: when "https" The FQDN of the server is correctly set, I have an external IP. DNS is configured for the FQDN to point at my external IP. Here's the contents of my /etc/gitlab/gitlab.rb in case that is useful: # Check and change the external_url to the address your users will type in their browser external_url 'gitlab.thefallenphoenix.net' gitlab_rails['gitlab_email_from'] = '[email protected]'
EDIT: This is now fixed by adding http:// or https:// to the domain in the .rb file. Tested on Debian 9 with GitLab EE. Add a = sign to the gitlab.rb. It should be: external_url = 'gitlab.thefallenphoenix.net' gitlab_rails['gitlab_email_from'] = '[email protected]' After that it should install fine. At least it worked for me on CentOS 6.6.
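Following the EDIT above, a form of the setting that passes the FQDN check includes the scheme in the URL (the raise happens because a URI without a scheme has no host); a sketch based on the hostname from the question:

external_url 'http://gitlab.thefallenphoenix.net'

then re-run:

sudo gitlab-ctl reconfigure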
GitLab
26,660,084
26
Seeing these huge indentations is painful (for me). Is there a way to set the tab size to 4 spaces? This picture is taken from a local GitLab CE server with minimal customization. I think a tab size of 8 spaces is the default.
Go to Settings β†’ Preferences β†’ Behavior and set Tab width.
GitLab
49,402,976
25
I have the following step in my declarative Jenkins pipeline: I create a script which comes from my resources/ folder using libraryResource. This script contains credentials for my autobuild user and for some admintest user. stage('Build1') { steps { node{ def script = libraryResource 'tests/test.sh' writeFile file: 'script.sh', text: script sh 'chmod +x script.sh' withCredentials([usernamePassword(credentialsId: xxx, usernameVariable: 'AUTOBUILD_USER', passwordVariable: 'AUTOBUILD_PASSWD')]){ sh './script.sh' } } } } This works fine. I can use my autobuild user. Now I'm searching for the best way to also include the credentials of my admintest user. Do I have to 'nest' it with a second withCredentials part, or can I add another usernamePassword entry?
Sure, you can use one withCredentials block to assign multiple credentials to different variables. withCredentials([ usernamePassword(credentialsId: credsId1, usernameVariable: 'USER1', passwordVariable: 'PASS1'), usernamePassword(credentialsId: credsId2, usernameVariable: 'USER2', passwordVariable: 'PASS2') ]){ //... }
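Inside the block each pair is exposed through the variables you named, so a usage sketch could look like the following; the credential ids and the script arguments are hypothetical, and the single-quoted sh string is deliberate so the shell expands the variables and Jenkins can mask the secrets in the log:

withCredentials([
    usernamePassword(credentialsId: 'autobuild-creds', usernameVariable: 'AUTOBUILD_USER', passwordVariable: 'AUTOBUILD_PASSWD'),
    usernamePassword(credentialsId: 'admintest-creds', usernameVariable: 'ADMINTEST_USER', passwordVariable: 'ADMINTEST_PASSWD')
]) {
    // both sets of credentials are available to the same script
    sh './script.sh "$AUTOBUILD_USER" "$ADMINTEST_USER"'
}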
Jenkins
47,475,160
88
I have a Jenkinsfile with some global variables and some stages. can I update the global variable out from a stage? An example: pipeline { agent any environment { PASSWD = "${sh(returnStdout: true, script: 'python -u do_some_something.py')}" ACC = "HI" } stage('stage1') { when { expression { params.UPDATE_JOB == false } } steps{ script { def foo= sh( returnStdout: true, script: 'python -u do_something.py ') env.ACC = foo println foo print("pw") println env.PASSWD } } } } Is it possible to update the ACC variable with the value from foo, so that I can use the ACC Variable in the next stage?
You can't override the environment variable defined in the environment {} block. However, there is one trick you might want to use. You can refer to ACC environment variable in two ways: explicitly by env.ACC implicitly by ACC The value of env.ACC cannot be changed once set inside environment {} block, but ACC behaves in the following way: when the variable ACC is not set then the value of env.ACC gets accessed (if exists of course). But when ACC variable gets initialized in any stage, ACC refers to this newly set value in any stage. Consider the following example: pipeline { agent any environment { FOO = "initial FOO env value" } stages { stage("Stage 1") { steps { script { echo "FOO is '${FOO}'" // prints: FOO is 'initial FOO env value' env.BAR = "bar" } } } stage("Stage 2") { steps { echo "env.BAR is '${BAR}'" // prints: env.BAR is 'bar' echo "FOO is '${FOO}'" // prints: FOO is 'initial FOO env value' echo "env.FOO is '${env.FOO}'" // prints: env.FOO is 'initial FOO env value' script { FOO = "test2" env.BAR = "bar2" } } } stage("Stage 3") { steps { echo "FOO is '${FOO}'" // prints: FOO is 'test2' echo "env.FOO is '${env.FOO}'" // prints: env.FOO is 'initial FOO env value' echo "env.BAR is '${BAR}'" // prints: env.BAR is 'bar2' script { FOO = "test3" } echo "FOO is '${FOO}'" // prints: FOO is 'test3' } } } } And as you can see in the above example, the only exception to the rule is if the environment variable gets initialized outside the environment {} block. For instance, env.BAR in this example was initialized in Stage 1, but the value of env.BAR could be changed in Stage 2 and Stage 3 sees changed value. UPDATE 2019-12-18 There is one way to override the environment variable defined in the environment {} block - you can use withEnv() block that will allow you to override the existing env variable. It won't change the value of the environment defined, but it will override it inside the withEnv() block. Take a look at the following example: pipeline { agent any stages { stage("Test") { environment { FOO = "bar" } steps { script { withEnv(["FOO=newbar"]) { echo "FOO = ${env.FOO}" // prints: FOO = newbar } } } } } } I also encourage you to check my "Jenkins Pipeline Environment Variables explained " video.
Jenkins
53,541,489
87
I have added an SSH credential to Jenkins. Unfortunately, I have forgotten the SSH passphrase and would now like to obtain it from Jenkins' credential archive, which is located at ${JENKINS_HOME}/credentials.xml. That XML document seems to have credentials encrypted in XML tags <passphrase> or <password>. How can I retrieve the plaintext passphrase?
Open your Jenkins' installation's script console by visiting http(s)://${JENKINS_ADDRESS}/script. There, execute the following Groovy script: println( hudson.util.Secret.decrypt("${ENCRYPTED_PASSPHRASE_OR_PASSWORD}") ) where ${ENCRYPTED_PASSPHRASE_OR_PASSWORD} is the encrypted content of the <password> or <passphrase> XML element that you are looking for.
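The reverse direction works from the same script console if you ever need to produce an encrypted value for credentials.xml by hand; the value below is just a placeholder:

println( hudson.util.Secret.fromString("my-new-passphrase").getEncryptedValue() )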
Jenkins
37,683,143
87
I am having issues with sonar picking up the jacoco analysis report. Jenkins however is able to pick up the report and display the results. My project is a maven build, built by Jenkins. The jacoco report is generated by maven (configured in the pom). Sonar is executed by using the Jenkins plugin. This is what I see on SonarQube: This is the report i can see of the project in jenkins. The maven plugin config: <plugin> <groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>0.6.4.201312101107</version> <executions> <execution> <id>default-prepare-agent</id> <goals> <goal>prepare-agent</goal> </goals> </execution> <execution> <id>default-report</id> <phase>prepare-package</phase> <goals> <goal>report</goal> </goals> </execution> <execution> <id>default-check</id> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> Jenkins Sonar Plugin config
You were missing a few important sonar properties, Here is a sample from one of my builds: sonar.jdbc.dialect=mssql sonar.projectKey=projectname sonar.projectName=Project Name sonar.projectVersion=1.0 sonar.sources=src sonar.language=java sonar.binaries=build/classes sonar.tests=junit sonar.dynamicAnalysis=reuseReports sonar.junit.reportsPath=build/test-reports sonar.java.coveragePlugin=jacoco sonar.jacoco.reportPath=build/test-reports/jacoco.exec The error in Jenkins console output can be pretty useful for getting code coverage to work. Project coverage is set to 0% since there is no directories with classes. Indicates that you have not set the Sonar.Binaries property correctly No information about coverage per test Indicates you have not set the Sonar.Tests property properly Coverage information was not collected. Perhaps you forget to include debug information into compiled classes? Indicates that the sonar.binaries property was set correctly, but those files were not compiled in debug mode, and they need to be
Jenkins
22,174,501
86
I am trying to execute a shell script after the build in Jenkins, whether the build passes or fails. I cannot see an option in the post-build actions to execute a shell script, only to run a target.
Very easily done with the Post build task plugin.
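If the job is a pipeline rather than a freestyle job, the built-in post section gives the same behaviour without any plugin. A minimal declarative sketch; the build command and script path are placeholders:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build step
            }
        }
    }
    post {
        always {                  // other conditions: success, failure, unstable, ...
            sh './notify.sh'      // runs whether the build passed or failed
        }
    }
}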
Jenkins
11,160,363
86
You have a project which has got some SW requirements to run (e.g.: a specific version of Apache, a version of PHP, an instance of a MySQL database and a couple of other pieces of software). You have already discovered Vagrant, so your virtual environment is all setup. You can create boxes out of your configuration files and cookbooks. You have also understood the advantages of a Continuous Integration system such as Jenkins. Now you would like to combine these two worlds (Vagrant and Jenkins) to get the perfect Continuous Integration Environment. To be more specific, you would like not to install the SW required by your project on the machine running Jenkins, but you would like to use the virtual environment provided by Vagrant to periodically build your project on the top of it. The CI software (Jenkins) will build the Vagrant box for you and build and test your project on the top of it. How would you setup your environment to achieve this?
It is a good solution for a build system; my suggestion: Your current Jenkins works as the master CI (probably started by the user jenkins). Create another user on the same machine, or on another machine, to work as a Jenkins slave. The Jenkins slave can be invoked from the Jenkins master, and it can use a different user, like vagrant, who has the permissions and environment for Vagrant; therefore it will not interfere with the original Jenkins master server. Create your base Vagrant box; it can then be reused to speed up your deployment. Most of the installation information (packages) could be managed by Puppet (or Chef) to be loaded into your VM box. You can probably take a look at veewee, which can create Vagrant boxes on the fly. Here is Make CI easier with Jenkins CI and Vagrant, my guideline for this suggestion.
Jenkins
6,941,547
86
I've been dealing with the problem of scaling CI at my company and at the same time trying to figure out which approach to take when it comes to CI and multiple branches. There is a similar question at stackoverflow, Multiple feature branches and continuous integration. I've started a new one because I'd like to get more of discussion and provide some analysis in the question. So far I've found that there are 2 main approaches that I can take (or maybe some others???). Multiple set of jobs (talking about Jenkins/Hudson here) per branch Write tooling to manage the extra jobs Create/modify/delete Jobs in bulk Custom settings for each job per branch (SCM url, dep management repos duplications) Some examples of people tackling this problem with shell tools, ant scripts and Jenkins CLI. See: http://jenkins.361315.n4.nabble.com/Multiple-branches-best-practice-td2306578.html http://jenkins.361315.n4.nabble.com/Is-it-possible-to-handle-multiple-branches-where-some-jobs-should-run-on-each-one-without-duplicatin-td954729.html http://jenkins.361315.n4.nabble.com/Parallel-development-with-branches-td1013013.html Configure or Create hudson job automatically Will cause more load on your CI cluster Feedback cycle for devs slows down (if the infrastructure cannot handle the new load) Multiple set of jobs per 2 branches (dev & stable) Manage the two sets manually (if you change the conf of a job then be sure to change in the other branch) PITA but at least so few to manage Other extra branches won't get a full test suite before they get pushed to dev Unsatisfied devs. Why should a dev care about CI scaling problems. He has a simple request, when I branch I would like to test my code. Simple. So it seems if I want to provide devs with CI for their own custom branches I need special tooling for Jenkins (API or shellscripts or something?) and handle scaling. Or I can tell them to merge more often to DEV and live without CI on custom branches. Which one would you take or are there other options?
When you talk about scaling CI you're really talking about scaling the use of your CI server to handle all your feature branches along with your mainline. Initially this looks like a good approach as the developers in a branch get all the advantages of the automated testing that the CI jobs include. However, you run into problems managing the CI server jobs (like you have discovered) and more importantly, you aren't really doing CI. Yes, you are using a CI server, but you aren't continuously integrating the code from all of your developers. Performing real CI means that all of your developers are committing regularly to the mainline. Easy to say, but the hard part is doing it without breaking your application. I highly recommend you look at Continuous Delivery, especially the Keeping Your Application Releasable section in Chapter 13: Managing Components and Dependencies. The main points are: Hide new functionality until it's finished (A.K.A Feature Toggles). Make all changes incrementally as a series of small changes, each of which is releasable. Use branch by abstraction to make large-scale changes to the codebase. Use components to decouple parts of your application that change at different rates. They are pretty self explanatory except branch by abstraction. This is just a fancy term for: Create an abstraction over the part of the system that you need to change. Refactor the rest of the system to use the abstraction layer. Create a new implementation, which is not part of the production code path until complete. Update your abstraction layer to delegate to your new implementation. Remove the old implementation. Remove the abstraction layer if it is no longer appropriate. The following paragraph from the Branches, Streams, and Continuous Integration section in Chapter 14: Advanced Version Control summarises the impacts. The incremental approach certainly requires more discipline and care - and indeed more creativity - than creating a branch and diving gung-ho into re-architecting and developing new functionality. But it significantly reduces the risk of your changes breaking the application, and will save your and your team a great deal of time merging, fixing breakages, and getting your application into a deployable state. It takes quite a mind shift to give up feature branches and you will always get resistance. In my experience this resistance is based on developers not feeling safe committing code the the mainline and this is a reasonable concern. This in turn usually stems from a lack of knowledge, confidence or experience with the techniques listed above and possibly with the lack of confidence with your automated tests. The former can be solved with training and developer support. The latter is a far more difficult problem to deal with, however branching doesn't provide any extra real safety, it just defers the problem until the developers feel confident enough with their code.
Jenkins
5,611,365
86
I am building a workflow with Gitlab, Jenkins and - probably - Nexus (I need an artifact storage). I would like to have GitLab to store releases/binaries - is it possible in a convenient way? I would not like to have another service from which a release (and documentation) could be downloaded but to have it somehow integrated with repository manager, just like releases are handled in e.g. GitHub. Any clues?
Update Oct. 2020: GitLab 13.5 now offers: Attach binary assets to Releases If you aren’t currently using GitLab for your releases because you can’t attach binaries to releases, your workflow just got a lot simpler. You now have the ability to attach binaries to a release tag from the gitlab.ci-yml. This extends support of Release Assets to include binaries, rather than just asset links or source code. This makes it even easier for your development teams to adopt GitLab and use it to automate your release process. See Documentation and Issue. Update Nov. 2015: GitLab 8.2 now supports releases. With its API, you now can create and update a relase associated to a tag. For now, it is only the ability to add release notes (markdown text and attachments) to git tags (aka Releases). first upload the release binary create a new release and place a link to the uploaded binary in the description Update May 2019: GitLab 1.11 introduces an interesting "Guest access to Releases": It is now possible for guest users of your projects to view releases that you have published on the Releases page. They will be able to download your published artifacts, but are not allowed to download the source code nor see repository information such as tags and commits. Original answer March 2015 This is in progress, and suggested in suggestions 4156755: We’re accepting merge requests for the minimal proposal by Ciro: For each repository tag under https://github.com/cirosantilli/test/releases/tag/3.0, allow to upload and download a list of files. 2. The upload and download can be done directly from the tag list view. 3. If a tag gets removed, the uploads are destroyed. (we’re not accepting the tag message edit mentioned in recent comments) The comments to that suggestion include: What makes it more interesting is the next step. I would really like a way to let public download artifacts from "releases" without being able to access source code (i.e. make sources private for project team only except anything else like wiki, "releases", issue tracker). However, such additional feature looks more generic and I submitted a separate feature request for that. Nevertheless, I repeat my point here: While the simplistic version of "releases" is still nice to have, many people can easily set up external file server and point URLs in release/tag description to this server outside GitLab. In other words, "releases" may not look attractive now without some future picture of integration.
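On recent GitLab versions the whole flow can also be scripted in CI with the release keyword and the release-cli image. The sketch below is only an illustration: the stage name is assumed to exist in your pipeline and the asset URL is a placeholder for wherever the binary was uploaded (for example the package registry or a generic package URL).

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: '$CI_COMMIT_TAG'
    description: 'Release $CI_COMMIT_TAG'
    assets:
      links:
        - name: 'myapp-binary'
          url: 'https://example.com/downloads/myapp-$CI_COMMIT_TAG.tar.gz'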
Jenkins
29,013,457
85
I need to launch a dynamic set of tests in a declarative pipeline. For better visualization purposes, I'd like to create a stage for each test. Is there a way to do so? The only way to create a stage I know is: stage('foo') { ... } I've seen this example, but I it does not use declarative syntax.
Use the scripted syntax that allows more flexibility than the declarative syntax, even though the declarative is more documented and recommended. For example stages can be created in a loop: def tests = params.Tests.split(',') for (int i = 0; i < tests.length; i++) { stage("Test ${tests[i]}") { sh '....' } }
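If the generated tests can run independently, the same idea extends to parallel branches in scripted syntax. A sketch, where run_tests.sh is a hypothetical test runner and a local copy of the loop variable is taken so each closure captures its own value:

def tests = params.Tests.split(',')
def branches = [:]
for (int i = 0; i < tests.length; i++) {
    def name = tests[i]                 // local copy for the closure below
    branches["Test ${name}"] = {
        node {
            stage("Test ${name}") {
                sh "./run_tests.sh '${name}'"
            }
        }
    }
}
parallel branches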
Jenkins
42,837,066
85
I don't know Jenkins at all. I want to install Jenkins on Windows 10. I downloaded the installer and ran it, but I have a problem. I don't know what to enter in the "Account" and "Password" fields on the "Service Logon Credentials" stage. If I use the username and password of my Windows account (with administrator privileges), an error is displayed.
When installing a service to run under a domain user account, the account must have the right to logon as a service. This logon permission applies strictly to the local computer and must be granted in the Local Security Policy. Perform the following to edit the Local Security Policy of the computer you want to define the β€˜logon as a service’ permission: Logon to the computer with administrative privileges. Open the Administrative Tools and open the Local Security Policy If the Local Security Policy is missing in your system, refer to the answer in the Where to download GPEdit.msc for Windows 10 Home? question on Microsoft Community to troubleshoot Expand Local Policy [Note: it's ... Policies on Win Server] and click on User Rights Assignment In the right pane, right-click Log on as a service and select properties. Click on the Add User or Group… button to add the new user. In the Select Users or Groups dialogue, find the user you wish to enter and click OK Click OK in the Log on as a service Properties to save changes. Then try again with the added user. – Jenkins User Handbook > Windows > Invalid service logon credentials
Jenkins
63,410,442
84
I'm following guideline how to sign Android apk with Jenkins. I have parametrized Jenkins job with KSTOREPWD and KEYPWD. A part of Jenkins' job configuration (Build->Execute shell) is to take those parameters and store them as environment variables: export KSTOREPWD=${KSTOREPWD} export KEYPWD=${KEYPWD} ... ./gradlew assembleRelease The problem is when the build is over anybody can access the build "Console Output" and see what passwords were entered; part of that output: 08:06:57 + export KSTOREPWD=secretStorePwd 08:06:57 + KSTOREPWD=secretStorePwd 08:06:57 + export KEYPWD=secretPwd 08:06:57 + KEYPWD=secretPwd So I'd like to suppress echo before output from export commands and re-enable echo after export commands.
By default, Jenkins launches the Execute Shell script with set -x. This causes all commands to be echoed. You can type set +x before any command to temporarily override that behavior. Of course you will need set -x to start showing them again. You can override this behaviour for the whole script by putting the following at the top of the build step: #!/bin/bash +x
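Applied to the build step from the question, the selective form looks roughly like this sketch:

set +x                         # stop echoing commands so the passwords stay out of the console log
export KSTOREPWD=${KSTOREPWD}
export KEYPWD=${KEYPWD}
set -x                         # turn echoing back on for the rest of the build
./gradlew assembleRelease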
Jenkins
26,797,219
84
A Windows slave node is connected to the Jenkins server through "Java Web Start". The system information of the node doesn't show its IP address. I had to go through all the slave nodes we had to find which machine (IP address) corresponds to the slave node in Jenkins. Is there a way to find the IP address of a slave node from Jenkins itself?
Through the Script Console (Manage Jenkins -> Nodes -> Select a node -> Script Console) of the node we can execute groovy script. Run the following command to get the IP address. println InetAddress.localHost.canonicalHostName
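If you'd rather not open each node's script console one by one, roughly the same lookup can be run once from the master's script console (Manage Jenkins -> Script Console). This is a sketch; hostName may come back null for agents that are offline:

jenkins.model.Jenkins.instance.computers.each { c ->
    println "${c.name}: ${c.hostName}"
}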
Jenkins
14,930,329
84
With the release of Xcode 8, Apple introduced a new way of managing the signing configuration. Now you have two options Manual and Automatic. According to the WWDC 2016 Session about Code signing (WWDC 2016 - 401 - What's new in Xcode app signing), when you select Automatic signing, Xcode is going to: Create signing certificates Create and update App IDs Create and update provisioning profiles But according to what Apple says in that session, the Automatic Signing is going to use Development signing and will be limited to Xcode-created provisioning profiles. The issue comes when you try to use Automatic Signing on a CI environment (like Travis CI or Jenkins). I'm not able to figure out an easy way to keep using Automatic and sign for Distribution (as Xcode forces you to use Development and Xcode-created provisioning profiles). The new "Xcode-created provisioning profiles" do not show up in the developer portal, although I can find then in my machine... should I move those profiles to the CI machine, build for Development and export for Distribution? Is there a way to override the Automatic Signing using xcodebuild?
I basically run into the same issue using Jenkins CI and the Xcode Plugin. I ended up doing the build and codesigning stuff myself using xcodebuild. 0. Prerequisites In order to get the following steps done successfully, you need to have installed the necessary provisioning profiles and certificates. That means your code signing should already be working in general. 1. Building an .xcarchive xcodebuild -project <path/to/project.xcproj> -scheme <scheme-name> -configuration <config-name> clean archive -archivePath <output-path> DEVELOPMENT_TEAM=<dev-team-id> DEVELOPMENT_TEAM: your 10 digit developer team id (something like A1B2C3D4E5) 2. Exporting to .ipa xcodebuild -exportArchive -archivePath <path/to/your.xcarchive> -exportOptionsPlist <path/to/exportOptions.plist> -exportPath <output-path> Example of an exportOptions.plist: <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>method</key> <string>development</string> <key>teamID</key> <string> A1B2C3D4E5 </string> </dict> </plist> method: is one of development, app-store, ad-hoc, enterprise teamID: your 10 digit developer team id (something like A1B2C3D4E5) This process is anyway closer to what you would do with Xcode manually, than what for example the Jenkins Xcode Plugin does. Note: The .xcarchive file will always be develpment signed, but selecting "app-store" as method in the 2nd step will do the correct distribution signing and also include the distribution profile as "embedded.mobileprovision". Hope this helps.
Jenkins
39,500,634
83
I have a job called "development" and another project called "code analysis". At the moment we have two different jobs and different workspaces, but the same code; is there any way we could use the same workspace for multiple jobs? I checked the plugins available in Jenkins but haven't found a suitable one.
Suppose your "development" Jenkins job workspace is /var/workspace/job1. In the "code analysis" job configuration page, under the General tab, click on Advanced..., select the option Use custom workspace, and give the same workspace, /var/workspace/job1, as that of your "development" job.
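For pipeline jobs the equivalent is the customWorkspace option on the agent. A declarative sketch, where the label and the stage contents are placeholders:

pipeline {
    agent {
        node {
            label 'linux'                          // hypothetical agent label
            customWorkspace '/var/workspace/job1'  // same directory as the "development" job
        }
    }
    stages {
        stage('Analyse') {
            steps {
                sh 'ls'   // runs inside /var/workspace/job1
            }
        }
    }
}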
Jenkins
21,520,475
81
A step in my pipeline uploads a .tar to an artifactory server. I am getting a Bad substitution error when passing in env.BUILD_NUMBER, but the same commands works when the number is hard coded. The script is written in groovy through jenkins and is running in the jenkins workspace. sh 'curl -v --user user:password --data-binary ${buildDir}package${env.BUILD_NUMBER}.tar -X PUT "http://artifactory.mydomain.com/artifactory/release-packages/package${env.BUILD_NUMBER}.tar"' returns the errors: [Pipeline] sh [Package_Deploy_Pipeline] Running shell script /var/lib/jenkins/workspace/Package_Deploy_Pipeline@tmp/durable-4c8b7958/script.sh: 2: /var/lib/jenkins/workspace/Package_Deploy_Pipeline@tmp/durable-4c8b7958/script.sh: Bad substitution [Pipeline] } //node [Pipeline] Allocate node : End [Pipeline] End of Pipeline ERROR: script returned exit code 2 If hard code in a build number and swap out ${env.BUILD_NUMBER} I get no errors and the code runs successfully. sh 'curl -v --user user:password --data-binary ${buildDir}package113.tar -X PUT "http://artifactory.mydomain.com/artifactory/release-packages/package113.tar"' I use ${env.BUILD_NUMBER} within other sh commands within the same script and have no issues in any other places.
This turned out to be a syntax issue. Wrapping the command in single quotes caused the literal string ${env.BUILD_NUMBER} to be passed instead of its value. I wrapped the whole command in double quotes and escaped the nested ones. It works fine now. sh "curl -v --user user:password --data-binary ${buildDir}package${env.BUILD_NUMBER}.tar -X PUT \"http://artifactory.mydomain.com/artifactory/release-packages/package${env.BUILD_NUMBER}.tar\""
Jenkins
37,219,348
80
What plugins and plugin features do I need to set in order to get my Jenkins job to trigger a build any time code is committed to an SVN project? I have installed both the standard SVN plugin as well as the SVN tagging plugin, but I do not see any new features that allow trigger configuration.
There are two ways to go about this: I recommend the first option initially, due to its ease of implementation. Once you mature in your build processes, switch over to the second. Poll the repository to see if changes occurred. This might "skip" a commit if two commits come in within the same polling interval. Description of how to do so here, note the fourth screenshot where you configure on the job a "build trigger" based on polling the repository (with a crontab-like configuration). Configure your repository to have a post-commit hook which notifies Jenkins that a build needs to start. Description of the plugin here, in the section "post-commit hooks" The SVN Tag feature is not part of the polling, it is part of promoting the current "head" of the source code to a tag, to snapshot a build. This allows you to refer to Jenkins buid #32 as SVN tag /tags/build-32 (or something similar).
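For option 1, the polling schedule goes in the job's Build Triggers -> Poll SCM field and uses cron-style syntax; for example, to poll roughly every five minutes:

H/5 * * * *

The H token lets Jenkins spread the exact minute so that many jobs don't all poll at the same instant.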
Jenkins
10,014,252
80
How do I schedule a Jenkins build such that it would be able to build only at specific hours every day? For example to start at 4 PM 0 16 1-7 * * I understand that as, "at 0 minutes, at 4 o'clock PM, from Monday to Sunday, every month", however it builds every minute :( I would be grateful for any advice. Thanks!
Update: please read the other answers and comments as they contain more info (e.g., hash functions) that I did not know when I first answered this question. According to Jenkins' own help (the "?" button) for the schedule task, 5 fields are specified: This field follows the syntax of cron (with minor differences). Specifically, each line consists of 5 fields separated by TAB or whitespace: MINUTE HOUR DOM MONTH DOW I just tried to get a job to launch at 4:42PM (my approximate local time) and it worked with the following, though it took about 30 extra seconds: 42 16 * * * If you want multiple times, I think the following should work: 0 16,18,20,22 * * * for 4, 6, 8, and 10 o'clock PM every day.
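On newer Jenkins versions it is also worth replacing the fixed minute with H so that jobs don't all fire at the same instant; for instance, once per weekday sometime during the 4 PM hour:

H 16 * * 1-5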
Jenkins
7,000,251
80
How can I specify something like the following in my Jenkinsfile? when branch not x I know how to specify branch specific tasks like: stage('Master Branch Tasks') { when { branch "master" } steps { sh '''#!/bin/bash -l Do some stuff here ''' } } However I'd like to specify a stage for when branch is not master or staging like the following: stage('Example') { if (env.BRANCH_NAME != 'master' && env.BRANCH_NAME != 'staging') { echo 'This is not master or staging' } else { echo 'things and stuff' } } However the above does not work and fails with the following errors: WorkflowScript: 62: Not a valid stage section definition: "if WorkflowScript: 62: Nothing to execute within stage "Example" Note source for my failed try: https://jenkins.io/doc/book/pipeline/syntax/#flow-control
With this issue resolved, you can now do this: stage('Example (Not master)') { when { not { branch 'master' } } steps { sh 'do-non-master.sh' } }
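To match the original "not master or staging" condition, not can be combined with anyOf (this needs a reasonably recent Declarative Pipeline plugin):

stage('Example') {
    when {
        not {
            anyOf {
                branch 'master'
                branch 'staging'
            }
        }
    }
    steps {
        echo 'This is not master or staging'
    }
}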
Jenkins
43,578,528
79
I'm trying to use Jenkins (Global) environment variables in my xcopy script. ${WORKSPACE} doesn't work "${WORKSPACE}" doesn't work '${WORKSPACE}' doesn't work
I know nothing about Jenkins, but it looks like you are trying to access environment variables using some form of unix syntax - that won't work. If the name of the variable is WORKSPACE, then the value is expanded in Windows batch using %WORKSPACE%. That form of expansion is performed at parse time. For example, this will print to screen the value of WORKSPACE echo %WORKSPACE% If you need the value at execution time, then you need to use delayed expansion !WORKSPACE!. Delayed expansion is not normally enabled by default. Use SETLOCAL EnableDelayedExpansion to enable it. Delayed expansion is often needed because blocks of code within parentheses and/or multiple commands concatenated by &, &&, or || are parsed all at once, so a value assigned within the block cannot be read later within the same block unless you use delayed expansion. setlocal enableDelayedExpansion set WORKSPACE=BEFORE ( set WORKSPACE=AFTER echo Normal Expansion = %WORKSPACE% echo Delayed Expansion = !WORKSPACE! ) The output of the above is Normal Expansion = BEFORE Delayed Expansion = AFTER Use HELP SET or SET /? from the command line to get more information about Windows environment variables and the various expansion options. For example, it explains how to do search/replace and substring operations.
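Putting that together for the xcopy case in the question, a Windows batch build step could look like this sketch; the source and destination paths are placeholders:

xcopy /E /I /Y "%WORKSPACE%\build" "C:\deploy\myapp"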
Jenkins
8,606,664
78
I'm currently using jenkins/hudson for continuous integration a large mostly C++ project. We have separate projects for trunk and every branch. Also, there are some related projects for the Java code, but the setup for those are fairly basic right now (we may do more later though). The C++ projects do the following: Builds everything with options for whether to reconfigure, do a clean build, or use a fresh checkout Optionally builds and runs all tests Optionally runs all tests using Valgrind's memcheck Runs cppcheck Generates doxygen documentation Publishes reports: unit tests, valgrind, cppcheck, compiler warnings, SLOC, open tasks, and code coverage (using gcov, gcovr, and the cobertura plugin) Deploys code nightly or on demand to a test environment and a package repository Everything is configurable for automatic builds and optional for on demand builds. Underneath, there's a bash script that controls much of this, which farther depends on our build system, which uses automake and autoconf along with custom bash scripts. We started using Hudson (at the time) because that's what the Java guys were using and we just wanted nightly builds. Since then, we've added a lot more and continue to add more. In some ways Hudson is great, but certainly isn't ideal. I've looked at other solutions and the only one that looks like it could be a replacement is buildbot. Would buildbot be better for this situation? Is the investment worth it since we're already using Hudson? Why? EDIT: Someone asked why I haven't found Hudson/Jenkins to be ideal. The short answer is that everything can be improved. I'm simply wondering if Jenkins is the best current solution for my use case or whether there is something better (buildbot?) that would be easier to maintain in the long run even as new requirements come up.
Both are open source projects, but you do not need to change buildbot code to "extend" it; it is actually quite easy to import your own packages in its configuration, in which you can sub-class most of the features with your own additions. Examples: your own compilation or test code, some parsing of outputs/errors to be given to the next steps, your own formatting of alert emails, etc. There are lots of possibilities. Generally I would say that buildbot is the most "general-purpose" automatic build tool. Jenkins, however, might be the best for running tests, especially for parsing and presenting results in nice ways (results, details, charts, all a few clicks away), things that buildbot does not do "out-of-the-box". I'm actually thinking of using both to have sexier test result pages. :-) Also, as a rule of thumb, it should not be difficult to create a new tool's config: if the specification of what to do (configs, builds, tests) is too hard to switch from one tool to another, it is a (bad) sign that not enough configuration scripts are moved to the sources. Buildbot (or Jenkins) should only call simple commands. If it is simple to run tests, then developers will do it as well and this will improve the success rate, whereas if only the continuous integration system runs the tests, you will be running after it to fix the new code failures, and it will lose its non-regression value. Just my 0.02€ :-) Hope it helps.
Jenkins
5,653,372
78
What is the maximum number of jobs I can run concurrently in Jenkins?
The maximum number of Jenkins jobs is dependent upon what you set as the limits in the master and slaves. Usually, we limit by the number of cores, but your mileage may vary depending upon available memory, disk speed, availability of SSD, and overlap of source code. For the master, this is set in Manage Jenkins > Configure System > # of executors For the slaves (nodes), it is set in Manage Jenkins > Nodes > (each node) > Configure > # of executors
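If you prefer to script this instead of clicking through the UI, a rough sketch for the Script Console is below; the executor count of 8 is an arbitrary example value, not something from the answer above.

// Run from Manage Jenkins > Script Console (requires admin rights)
import jenkins.model.Jenkins

// Set the number of executors on the controller/master itself
Jenkins.instance.setNumExecutors(8)

// Persist the change so it survives a restart
Jenkins.instance.save()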
Jenkins
9,626,899
77
I'm using a EC2 server instance. Used the following to install Jenkins: wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add - sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list' sudo apt-get update sudo apt-get install jenkins but I need to install software on the Jenkins server so in my EC2 instance I did sudo –s –H –u jenkins to get into the jenkins server. Then I tried to do sudo cabal install quickcheck but it prompted me for jenkins password. I've been searching around the internet for 4hrs now and nothing is helping me get administrative privilege in the jenkins server. So I'm building my project using the following command in shell: sudo cabal clean sudo cabal configure sudo cabal build sudo cabal install This is the error I'm getting: Started by timer Building in workspace /var/lib/jenkins/jobs/Finance/workspace Checkout:workspace / /var/lib/jenkins/jobs/Finance/workspace - hudson.remoting.LocalChannel@eea6dc Using strategy: Default Last Built Revision: Revision b638e2182dece0ef1a40232b1d75fa3ae5c01a5d (origin/master) Fetching changes from 1 remote Git repository Fetching upstream changes from origin Commencing build of Revision b638e2182dece0ef1a40232b1d75fa3ae5c01a5d (origin/master) Checking out Revision b638e2182dece0ef1a40232b1d75fa3ae5c01a5d (origin/master) [workspace] $ /bin/sh -xe /tmp/hudson3500373817395137440.sh + sudo cabal clean sudo: no tty present and no askpass program specified Sorry, try again. sudo: no tty present and no askpass program specified Sorry, try again. sudo: no tty present and no askpass program specified Sorry, try again. sudo: 3 incorrect password attempts Build step 'Execute shell' marked build as failure Sending e-mails to: ***@gmail.com ERROR: Could not connect to SMTP host: localhost, port: 25 javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 25; nested exception is: java.net.ConnectException: Connection refused at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1934) at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:638) at javax.mail.Service.connect(Service.java:295) at javax.mail.Service.connect(Service.java:176) at javax.mail.Service.connect(Service.java:125) at javax.mail.Transport.send0(Transport.java:194) at javax.mail.Transport.send(Transport.java:124) at hudson.tasks.MailSender.execute(MailSender.java:116) at hudson.tasks.Mailer.perform(Mailer.java:117) at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19) at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:814) at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:786) at hudson.model.Build$BuildExecution.post2(Build.java:183) at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:733) at hudson.model.Run.execute(Run.java:1592) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:237) Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at com.sun.mail.util.SocketFetcher.createSocket(SocketFetcher.java:286) at com.sun.mail.util.SocketFetcher.getSocket(SocketFetcher.java:231) at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1900) ... 17 more Finished: FAILURE
Here is how you can fix it: Stop Jenkins Go edit /var/lib/jenkins/config.xml Change <useSecurity>true</useSecurity> to false Restart Jenkins: sudo service jenkins restart Navigate in the Jenkins dashboard to the "Configure Security" option you likely used before. This time, set up security the same as before, BUT set it to allow anyone to do anything, and allow user signup. Go to www.yoursite.com/securityRealm/addUser and create a user Then change "allow anyone to do anything" to whatever you actually want users to be able to do. In my case, it is to allow logged-in users to do anything.
Jenkins
15,227,305
76
I am looking at limiting the number of concurrent builds to a specific number in Jenkins, leveraging the multibranch pipeline workflow but haven't found any good way to do this in the docs or google. Some docs say this can be accomplished using concurrency in the stage step of a Jenkinsfile but I've also read elsewhere that that is a deprecated way of doing it. It looks like there was something released fairly recently for limiting concurrency via Job Properties but I couldn't find documentation for it and I'm having trouble following the code. The only thing I found a PR that shows the following: properties([concurrentBuilds(false)]) But I am having trouble getting it working. Does anybody know or have a good example of how to limit the number of concurrent builds for a given, multibranch project? Maybe a Jenkinsfile snippet that shows how to limit or cap the number of multibranch concurrent builds?
Found what I was looking for. You can limit the concurrent builds using the following block in your Jenkinsfile. node { // This limits build concurrency to 1 per branch properties([disableConcurrentBuilds()]) //do stuff ... } The same can be achieved with a declarative syntax: pipeline { options { disableConcurrentBuilds() } }
Jenkins
41,492,688
76
For Jenkins using a Groovy System Script, is there a way to easily search the build queue and list of executing builds for some criteria (specifically a parameter that matches some condition) and then kill/cancel them? I cannot seem to find any way to do this, but it seems like it should be possible.
I haven't tested it myself, but looking at the API it should be possible in the following way: import hudson.model.* import jenkins.model.Jenkins def q = Jenkins.instance.queue q.items.findAll { it.task.name.startsWith('my') }.each { q.cancel(it.task) } Relevant API links: http://javadoc.jenkins-ci.org/jenkins/model/Jenkins.html http://javadoc.jenkins-ci.org/hudson/model/Queue.html
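The snippet above only clears the queue. To also abort builds that are already executing, something along these lines should work (an untested sketch; the 'my' prefix check is just an illustrative criterion, so replace it with whatever parameter or job-name condition you actually need):

import jenkins.model.Jenkins

Jenkins.instance.computers.each { computer ->
  computer.executors.each { executor ->
    // currentExecutable is the running build, e.g. "myjob #42"
    if (executor.isBusy() && executor.currentExecutable.toString().contains('my')) {
      executor.interrupt()   // marks the build as ABORTED
    }
  }
}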
Jenkins
12,305,244
76
With Jenkins 2 Pipeline plugin, there's a useful feature allowing a quick overview of the pipeline stages and status of steps, including logging output. However, if you use the "Shell script" (sh) step, there doesn't seem to be a way to label that script with a useful name, so the display merely shows a long list of "Shell Script" (shown in the image below). How can I assign a useful name, or how can I use some other step to accomplish the same effect?
Update Feb 2019: According to gertvdijk's answer below, it is now possible to assign an optional label to the sh step, starting from v2.28, and for those who can't upgrade yet, there's also a workaround. Please check his answer for details and comments! Previous version: As far as I know, that's currently not possible. In the Jenkins tracker, there is a Name or alias Shell Script Step (sh) issue which is similar to your situation: The sh step adds a "Shell Script" step in the Pipeline. However, there could be multiple such steps including steps from various plugins (e.g., Docker), which makes it hard to distinguish the steps. We should perhaps add an optional parameter to sh to specify a name or alias which would then appear in the pipeline steps. e.g., the following can be the step for npm which would show as "Shell script: npm" in the pipeline view. sh cmd:"npm install", name: "npm" However, it was closed as a duplicate of the older Allow stage to operate as a labelled block which has been fixed recently and seems to be included in v2.2 of the pipeline-stage-step-plugin (see changelog). It seems that stages can now be nested and they will appear in the view table, but I don't think it's what you're looking for.
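For reference, once you are on a new enough version (v2.28+ of the step, as mentioned above), the optional label looks roughly like this; the npm command is only a placeholder:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // The label is shown in the step view instead of the generic "Shell Script"
                sh script: 'npm install', label: 'Install npm dependencies'
            }
        }
    }
}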
Jenkins
39,414,921
75
Recently, in our company, we decided to use Ansible for deployment and continuous integration. But when I started using Ansible I didn't find modules for building Java projects with Maven, or modules for running JUnit tests, or JMeter tests. So, I'm in a doubtful state: it may be that I'm using Ansible in the wrong way. When I looked at Jenkins, it can do things like build, run tests, deploy. The missing thing in Hudson is creating/deleting an instance in cloud environments like AWS. So, in general, for what purposes do we need to use Ansible/Jenkins? For CI do I need to use a combination of Ansible and Jenkins? Please shed some light on the correct usage of Ansible.
First, Jenkins and Hudson are basically the same project. I'll refer to it as Jenkins below. See How to choose between Hudson and Jenkins?, Hudson vs Jenkins in 2012, and What is the most notable difference between Jenkins and Hudson from a user perpective? for more. Second, Ansible isn't meant to be a continuous integration engine. It (generally) doesn't poll git repos and run builds that fail in a sane way. When can I simply use Jenkins? If your machine environment and deployment process is very straightforward (such as Heroku or iron that is configured outside of your team), Jenkins may be enough. You can write a custom script that does a deploy as the final build step (or a chained step). When can I simply use Ansible? If you only need to "deploy" without needing to build/test, Ansible might be enough. For instance, you can run a deploy from the commandline or using Ansible Tower. This is great for small projects, static sites, etc. How do they work together? A good combination is to use Jenkins to build, test, and save artifacts. Add a step to call Ansible or Ansible Tower to handle the actual deployment process. That allows Ansible to handle machine configuration and lets Jenkins handle the CI process. What are the alternatives to Jenkins? I strongly recommend Thoughtworks Go (not to be confused with Go the language) instead of Jenkins. Others include CruiseControl, TravisCI, and Integrity.
Jenkins
25,842,718
75
What shell is used in Jenkins when calling the shell command? I'm running Jenkins on a Linux machine.
From the help/question mark icon of the "Execute shell" section: Runs a shell script (defaults to sh, but this is configurable) for building the project. If you go to Manage Jenkins --> Configure System you will find an option (called "Shell executable") to set the name or absolute path to the shell that you want your shell scripts to use... For my system without configuring this option... it uses bash!
Jenkins
12,455,932
75
I have a parameterized Jenkins job which requires the input of a specific Git branch in a specific Git repo. Currently this parameter is a string parameter. Is there any way to make this parameter a choice parameter and dynamically fill the drop down list with the Git branches? I don't want to require someone to maintain this choice parameter by manually configuring the drop down every time a new branch is created.
I tried a couple of answers mentioned in this link, but couldn't figure out how to tell Jenkins about the user-selected branch. As mentioned in my previous comment in above thread, I had left the branch selector field empty. But, during further investigations, I found another way to do the same thing - https://wiki.jenkins-ci.org/display/JENKINS/Git+Parameter+Plugin I found this method was a lot simpler, and had less things to configure! Here's what I configured - Installed the git parameter plugin Checked the 'This build is parameterized' and added a 'Git parameter' Added the following values: Then in the git SCM section of the job I added the same value mentioned in the 'Name' section, as if it were an environment variable. (If you read the help for this git parameter plugin carefully, you will realize this) After this I just ran the build, chose my branch(Jenkins checks out this branch before building) and it completed the build successfully, AND by choosing the branch that I had specified.
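If the job is a Pipeline job rather than a freestyle job, the same Git Parameter plugin also exposes a gitParameter parameter type. A rough declarative sketch (the repository URL, default branch and filter below are placeholders, not values from the original setup):

pipeline {
    agent any
    parameters {
        // Provided by the Git Parameter plugin; the branch list is populated at build time
        gitParameter name: 'BRANCH', type: 'PT_BRANCH', defaultValue: 'master', branchFilter: 'origin/(.*)'
    }
    stages {
        stage('Checkout') {
            steps {
                // Check out whichever branch the user selected
                git url: 'https://example.com/your/repo.git', branch: "${params.BRANCH}"
            }
        }
    }
}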
Jenkins
10,433,105
74
I have been trying to follow the instructions on how to change the default view in Jenkins here. I've created another view that I would like to be the default, but when I go looking for the Default View setting in Manage Jenkins -> Configure System it doesn't seem to be there. Is there something I have to do to make it show up? Or is it tucked away somewhere else? If someone has it working can they indicate where about in the config screen (immediately before/after something else) so I can double check. I am using Jenkins 1.447
from comment> When I go to Manage Jenkins -> Configure System and Default View, all our "public" views are listed there in the drop down. Make sure the view you created isn't just in "My Views" for your user, and is open to everyone.
Jenkins
8,822,200
74
In Jenkins, is there a way to restrict certain jobs so that only specific users can view them? Jenkins allows the restriction of user-abilities-per-project via the "Project-based Matrix Authorization Strategy". The problem is that a user can not access anything without the 'Overall' 'Read' setting. This seems to allow them to view all jobs. Is there another plugin that would allow job visibility restrictions?
I think this is what you are searching for: Allow access to specific projects for Users. Short description without screenshots: Use Jenkins "Project-based Matrix Authorization Strategy" under "Manage Jenkins" => "Configure System". On the configuration page of each project, you now have "Enable project-based security". Now add each user you want to authorize.
Jenkins
8,323,129
74
Is there a one-command way to get an up-to-date mirror of a remote repo? That is if local repo not there yet: clone if it's there: pull I know I could script this around (e.g if [ -d repo ]; then (cd repo && git pull); else git clone $repourl;fi ) , but I need the simplest possible cross-platform way (actually used for Jenkins-CI, which I know does this by default, however I need 2 repos for which support is limited). Git has similar shortcuts for other things (eg. checkout -b, and pull itself), so I'm wondering if I missed something. Thanks!
There is not, given that the commands which operate on existing repos all assume that they're being run inside a given repo. That said, if you're running in a shell, you could simply make use of the shell built-ins. For instance, here's bash: if cd repo; then git pull; else git clone https://server/repo repo; fi This checks to see if repo is a valid directory, and if so, does a pull within it; otherwise it does a clone to create the directory.
Jenkins
15,602,059
73
I have a large repository in Git. How do I create a job in Jenkins that checks out just one sub-folder from the project?
The Jenkins Git Plugin supports sparse checkouts since git-plugin 2.1.0 (April 2014). You will need git >= 1.7.0 for this feature. It is under "Additional Behaviors" -> "Sparse Checkout paths." See: Jira issue JENKINS-21809
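For Pipeline jobs, the same behaviour can be expressed through the checkout step's GitSCM extensions. An approximate sketch (the URL, branch and path are placeholders):

checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [
        // Only materialize the sub-folder you care about
        [$class: 'SparseCheckoutPaths', sparseCheckoutPaths: [[path: 'path/to/subfolder/']]]
    ],
    userRemoteConfigs: [[url: 'https://example.com/your/large-repo.git']]
])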
Jenkins
10,791,472
73
I can find the current git branch name by doing either of these: git branch | awk '/^\*/ { print $2 }' git describe --contains --all HEAD But when in a detached HEAD state, such as in the post build phase in a Jenkins maven build (or in a Travis git fetch), these commands doesn't work. My current working solution is this: git show-ref | grep $(git log --pretty=%h -1) | sed 's|.*/\(.*\)|\1|' | sort -u | grep -v HEAD It displays any branch name that has the last commit on its HEAD tip. This works fine, but I feel that someone with stronger git-fu might have a prettier solution?
A more porcelain way: git log -n 1 --pretty=%d HEAD # or equivalently: git show -s --pretty=%d HEAD The refs will be listed in the format (HEAD, master) - you'll have to parse it a little bit if you intend to use this in scripts rather than for human consumption. You could also implement it yourself a little more cleanly: git for-each-ref --format='%(objectname) %(refname:short)' refs/heads | awk "/^$(git rev-parse HEAD)/ {print \$2}" with the benefit of getting the candidate refs on separate lines, with no extra characters.
Jenkins
6,059,336
73
I have Windows 10 and I want to execute the sh command in the Jenkinsfile from a Jenkins pipeline using Bash on Ubuntu on Windows, but it doesn't work. I have the following stage in my Jenkins pipeline: stage('sh how to') { steps { sh 'ls -l' } } The error message is: [C:\Program Files (x86)\Jenkins\workspace\pipelineascode] Running shell script Cannot run program "nohup" (in directory "C:\Program Files (x86)\Jenkins\workspace\pipelineascode"): CreateProcess error=2, Le fichier spécifié est introuvable I tried changing the Jenkins parameter -> shell executable to C:\Windows\System32\bash.exe but got the same error... How do I run the sh script using Windows 10's bash?
From a very quick search, it looks like your error is related to the following issue: JENKINS-33708 The main cause looks like the sh step is not supported on Windows. You may use bat or install Cygwin, for instance. Nevertheless, two solutions were proposed in the previous link, suggesting that you do the following steps: Install git-bash Ensure the Git\bin folder (i.e.: C:\Program Files\Git\bin) is in the global search path, in order for Jenkins to find sh.exe Make nohup available for Jenkins, doing the following in git-bash (adapt your paths accordingly): mklink "C:\Program Files\Git\bin\nohup.exe" "C:\Program Files\git\usr\bin\nohup.exe" mklink "C:\Program Files\Git\bin\msys-2.0.dll" "C:\Program Files\git\usr\bin\msys-2.0.dll" mklink "C:\Program Files\Git\bin\msys-iconv-2.dll" "C:\Program Files\git\usr\bin\msys-iconv-2.dll" mklink "C:\Program Files\Git\bin\msys-intl-8.dll" "C:\Program Files\git\usr\bin\msys-intl-8.dll" Depending on your installation you may have to use these paths: mklink "C:\Program Files\Git\cmd\nohup.exe" "C:\Program Files\git\usr\bin\nohup.exe" mklink "C:\Program Files\Git\cmd\msys-2.0.dll" "C:\Program Files\git\usr\bin\msys-2.0.dll" mklink "C:\Program Files\Git\cmd\msys-iconv-2.dll" "C:\Program Files\git\usr\bin\msys-iconv-2.dll" mklink "C:\Program Files\Git\cmd\msys-intl-8.dll" "C:\Program Files\git\usr\bin\msys-intl-8.dll"
Jenkins
45,140,614
72
I am new in using maven and jenkins. I am trying to inherit the dependencies from parent pom to child pom it shows the following errors: [ERROR] COMPILATION ERROR : [INFO] ------------------------------------------------------------- [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/XMLConverters.java:[10,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/XMLConverters.java:[11,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/XMLConverters.java:[15,38] cannot find symbol symbol: class AbstractRequestMessageData_Type location: class com.td.inv.wss.util.XMLConverters [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/XMLConverters.java:[26,23] cannot find symbol symbol: class AbstractResponseMessageData_Type location: class com.td.inv.wss.util.XMLConverters [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/UsTermRateItemComparator.java:[5,42] package com.rpmtec.current.UsTermRate_Type does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/UsTermRateItemComparator.java:[7,61] cannot find symbol symbol: class UsTermRateItems [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/UsTermRateItemComparator.java:[9,28] cannot find symbol symbol: class UsTermRateItems location: class com.td.inv.wss.util.UsTermRateItemComparator [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/UsTermRateItemComparator.java:[9,48] cannot find symbol symbol: class UsTermRateItems location: class com.td.inv.wss.util.UsTermRateItemComparator [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[9,40] package com.fasterxml.jackson.annotation does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[10,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[11,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[12,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[13,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[14,42] package com.rpmtec.current.UsTermRate_Type does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[19,2] cannot find symbol symbol: class JsonIgnoreProperties [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[69,22] cannot find symbol symbol: class ORCA_GETTERMHOLDINGRS_Type location: class com.td.inv.model.COIRQ [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[69,66] cannot find symbol symbol: class RPM_GETPLANACCOUNTOVERVIEWRS_Type location: class com.td.inv.model.COIRQ [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/model/COIRQ.java:[70,25] cannot find symbol symbol: class ORCA_GETTERMINSTRUCTIONRS_Type location: class com.td.inv.model.COIRQ 
[ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[5,26] package javax.ws.rs.client does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[6,26] package javax.ws.rs.client does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[7,26] package javax.ws.rs.client does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[8,26] package javax.ws.rs.client does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[9,24] package javax.ws.rs.core does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[15,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[16,26] package com.rpmtec.current does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[23,57] cannot find symbol symbol: class AbstractRequestMessageData_Type location: class com.td.inv.wss.util.RPMInvoker [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/util/RPMInvoker.java:[24,41] cannot find symbol symbol: class AbstractResponseMessageData_Type location: class com.td.inv.wss.util.RPMInvoker [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/application/InvestmentAPI.java:[4,19] package javax.ws.rs does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/application/InvestmentAPI.java:[5,24] package javax.ws.rs.core does not exist [ERROR] /D:/jenkins/workspace/CBAW/testP/WSW_Investment/src/main/java/com/td/inv/wss/application/InvestmentAPI.java:[9,36] cannot find symbol symbol: class Application Here is my parent POM: ..... <modelVersion>4.0.0</modelVersion> <groupId>group1</groupId> <artifactId>group1-artifact</artifactId> <version>1.0.1</version> <packaging>pom</packaging> <modules> <module>child1</module> </modules> ....... Here is my child POM: ..... <modelVersion>4.0.0</modelVersion> <parent> <groupId>group1</groupId> <artifactId>group1-artifact</artifactId> <version>1.0.1</version> <relativePath>(full url.....)/jenkins-parent-pom//pom.xml</relativePath> </parent> <groupId>group1</groupId> <artifactId>child1</artifactId> <version>0.0.1</version> <packaging>war</packaging> ...... Here is how I tried to inherit dependency in child POM from parent POM: <dependencyManagement> <dependencies> <dependency> <groupId>group1</groupId> <artifactId>group1-artifact</artifactId> <version>1.0.1</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> If I put those same dependencies in the child POM, it works perfectly. I do clean install for installing and deploy for deploying in nexus by using jenkins. I am using maven-3.3.9. In jenkins, I have read the parent and child poms in two different maven projects from git. I want to inherit all the dependencies and plugins from parent POM. Is it possible?
You should declare dependencies you want to inherit under a <dependencies> section to achieve this. <dependencyManagement> is used for definitions that must be referenced later, whenever needed, within the <dependencies> of a particular child to become effective. UPDATE: Be careful when declaring dependencies that every child pom will inherit. Very quickly you can end up having dependencies you don't really need just because they are declared in the parent. As mentioned by other commenters, <dependencyManagement> may be a better choice, although it isn't what you wanted originally.
Jenkins
38,882,221
72
What is the major difference between the Job DSL Plugin and the Pipeline Plugin? Both provide a way to create jobs programmatically; which is the best to use moving ahead, and why? If both have similar functionality, do they have different use cases? Since Jenkins 2.0 is focusing on Pipelines as code, does this mean that Job DSL does not have a future, or is the Pipeline Plugin the next step after the Job DSL Plugin?
I have extensive experience with both. A concise reply is that Job DSL has existed for much longer and was Netflix's open source solution for "coding" Jenkins. It allowed you to introduce logic and variables into scripting your Jenkins jobs and typically one would use these jobs to form some sort of "pipeline" for a particular project. This plugin received quite a bit of traction as a common way to enable job templating and scripting. Jenkins Pipeline (2.0) is a new incarnation of a Jenkins job that is entirely based on a DSL and attempts to eliminate the need to stitch together multiple jobs to fill a single pipeline which was by far the most common use of Job DSL. Originally, with Pipeline DSL not offering many of the features that Job DSL did, and as mentioned above Job DSL would allow you to create Pipeline jobs, they could be used together to define a pipeline. Today, IMO there is little reason to use Job DSL because Pipeline is the Jenkins-supported mechanism for scripting Jenkins pipelines and it has met or surpassed much of the functionality of Job DSL. New plugins are being developed natively for Pipeline, and those that don't are being encouraged by Jenkins developers to integrate with Pipeline. And Pipeline has several advantages: There is no need to "seed" jobs using Pipeline as there is with Job DSL since the Pipeline is the job itself. With Job DSL, it's just a script that creates other jobs. With Pipeline, you have features such as a parameterized manual input step, allowing you specify logic midstream within the pipeline The logic that can be included in a Job DSL is limited to creating the jobs themselves; whereas with Pipeline you can include logic directly inside the job. Job DSL is simply much more difficult to create a basic delivery pipeline using, for example, the Build Pipeline Plugin; using Pipeline your file will be smaller and syntax shorter. And if you're using Job DSL to create Pipeline jobs, I haven't seen a major value for that anymore given the templating features available out-of-the-box with Jenkins Pipeline. Finally, Jenkins Pipeline is by far the most prevalent feature of Jenkins right now. Check out the Jenkins World 2016 agenda and you'll see approx. 50% of the sessions involve pipeline. None for Job DSL.
Jenkins
37,657,810
72
In order to get the fastest feedback possible, we occasionally want Jenkins jobs to run in Parallel. Jenkins has the ability to start multiple downstream jobs (or 'fork' the pipeline) when a job finishes. However, Jenkins doesn't seem to have any way of making a downstream job only start of all branches of that fork succeed (or 'joining' the fork back together). Jenkins has a "Build after other projects are built" button, but I interpret that as "start this job when any upstream job finishes" (not "start this job when all upstream jobs succeed"). Here is a visualization of what I'm talking about. Does anyone know if a plugin exists to do what I'm after? Edit: When I originally posted this question in 2012, Jason's answer (the Join and Promoted Build plugins) was the best, and the solution I went with. However, dnozay's answer (The Build Flow plugin) was made popular a year or so after this question, which is a much better answer. For what it's worth, if people ask me this question today, I now recommend that instead.
Pipeline plugin You can use the Pipeline Plugin (formerly workflow-plugin). It comes with many examples, and you can follow this tutorial. e.g. // build stage 'build' ... // deploy stage 'deploy' ... // run tests in parallel stage 'test' parallel 'functional': { ... }, 'performance': { ... } // promote artifacts stage 'promote' ... Build flow plugin You can also use the Build Flow Plugin. It is simply awesome - but it is deprecated (development frozen). Setting up the jobs Create jobs for: build deploy performance tests functional tests promotion Setting up the upstream in the upstream (here build) create a unique artifact, e.g.: echo ${BUILD_TAG} > build.tag archive the build.tag artifact. record fingerprints to track file usage; if any job copies the same build.tag file and records fingerprints, you will be able to track the parent. Configure to get promoted when promotion job is successful. Setting up the downstream jobs I use 2 parameters PARENT_JOB_NAME and PARENT_BUILD_NUMBER Copy the artifacts from upstream build job using the Copy Artifact Plugin Project name = ${PARENT_JOB_NAME} Which build = ${PARENT_BUILD_NUMBER} Artifacts to copy = build.tag Record fingerprints; that's crucial. Setting up the downstream promotion job Do the same as the above, to establish upstream-downstream relationship. It does not need any build step. You can perform additional post-build actions like "hey QA, it's your turn". Create a build flow job // start with the build parent = build("build") parent_job_name = parent.environment["JOB_NAME"] parent_build_number = parent.environment["BUILD_NUMBER"] // then deploy build("deploy") // then your qualifying tests parallel ( { build("functional tests", PARENT_BUILD_NUMBER: parent_build_number, PARENT_JOB_NAME: parent_job_name) }, { build("performance tests", PARENT_BUILD_NUMBER: parent_build_number, PARENT_JOB_NAME: parent_job_name) } ) // if nothing failed till now... build("promotion", PARENT_BUILD_NUMBER: parent_build_number, PARENT_JOB_NAME: parent_job_name) // knock yourself out... build("more expensive QA tests", PARENT_BUILD_NUMBER: parent_build_number, PARENT_JOB_NAME: parent_job_name) good luck.
Jenkins
9,012,310
72
I think the title sums it up. I just want to know why one or the other is better for continuous integration builds of Java projects from SVN.
I agree with this answer, but wanted to add a few points. In short, Hudson (update: Jenkins) is likely the better choice now. First and foremost because creating and configuring jobs ("projects" in CC vocabulary) is just so much faster through Hudson's web UI, compared to editing CruiseControl's XML configuration file (which we used to keep in version control just to keep track of it better). The latter is not especially difficult - it simply is slower and more tedious. CruiseControl has been great, but as noted in Dan Dyer's aptly-named blog post, Why are you still not using Hudson?, it suffers from being first. (Um, like Britain, if you will, later into the industrial revolution, when others started overtaking it with newer technologies.) We used CruiseControl heavily and have gradually switched over to Hudson, finally using it exclusively. And even more heavily: in the process we've started using the CI server for many other things than before, because setting up and managing Hudson jobs is so handy. (We now have some 40+ jobs in Hudson: the usual build & test jobs for stable and development branches; jobs related to releasing (building installers etc); jobs that run some (experimental) metrics against the codebase; ones that run (slow) UI or integration tests against a specific database version; and so on.) From this experience I'd argue that even if you have lots of builds, including complicated ones, Hudson is a pretty safe choice because, like CC, you can use it to do anything, basically. Just configure your job to run whatever Ant or Maven targets, Unix shell scripts, or Windows .bat scripts, in the order you wish. As for the 3rd party stuff (mentioned here by Jeffrey Fredrick) - that is a good point, but my impression is that Hudson is quickly catching up, and that there's already a very large number of plugins available for it. For me, the two things I can name that I miss about CruiseControl are: Its warning emails about broken builds were more informative than those of Hudson. In most cases the root cause was evident from CC's nicely formatted HTML mail itself, whereas with Hudson I usually need to follow the link to Hudson web UI, and click around a little to get the details. The CruiseControl dashboard is better suited, out of the box, as an "information radiator" (shown on a public monitor, or projected on a wall, so that you can always quickly see the status of all projects). With Hudson's front page, we needed some Greasemonkey tricks to get job rows all nicely green/red. Minor disclaimer: I haven't been following the CC project closely for the last year or so. (But from a quick look, it has not changed in any dramatic way.) Note (2011-02-03): Hudson has been renamed/forked as Jenkins (by Hudson creator Kohsuke Kawaguchi and others). It looks as if Oracleβ€”which controls the Hudson nameβ€”will keep "Hudson" around too, but my personal recommendation is to go with Jenkins, no matter what Oracle says.
Jenkins
604,385
72
I have a couple of jobs that use a shared resource (database), which sometimes can cause builds to fail in the (rare) event that the jobs happen to get triggered simultaneously. Given jobs A through E, for example, is there any way to specify that A and C should never be run concurrently? Other than the aforementioned resource, the builds are independent of each other (not e.g. in a upstream/downstream relation). A "brute-force" way would be limiting number of executors to one, but that obviously is less than ideal if most jobs could well be executed concurrently and there's no lack of computing resources on the build server.
There are currently 2 ways of doing this: Use the Throttle Concurrent Builds plugin. Set up those jobs to run on a slave having only 1 executor.
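On newer Jenkins versions with Pipeline jobs there is a third option worth mentioning: the Lockable Resources plugin, where both jobs compete for a named lock instead of sharing a single executor. A hedged sketch (the resource name and test command are made up for illustration):

// Requires the Lockable Resources plugin
node {
    stage('Integration tests') {
        // Only one build across all jobs can hold this lock at any time
        lock(resource: 'shared-database') {
            sh './run-db-tests.sh'   // placeholder command
        }
    }
}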
Jenkins
6,276,272
71
I'm using Jenkins v2.1 with the integrated delivery pipeline feature (https://jenkins.io/solutions/pipeline/) to orchestrate two existing builds (build and deploy). In my parameterized build I have 3 user parameters setup, which also needs to be selectable in the pipeline. The pipeline script is as follows: node: { stage 'build' build job: 'build', parameters: [[$class: 'StringParameterValue', name: 'target', value: target], [$class: 'ListSubversionTagsParameterValue', name: 'release', tag: release], [$class: 'BooleanParameterValue', name: 'update_composer', value: update_composer]] stage 'deploy' build job: 'deploy', parameters: [[$class: 'StringParameterValue', name: 'target', value: target]] } This works correctly except for the BooleanParameterValue. When I build the pipeline the following error is thrown: java.lang.ClassCastException: hudson.model.BooleanParameterValue.value expects boolean but received class java.lang.String How can I resolve this typecasting error? Or even better, is there a less cumbersome way in which I can just pass ALL the pipeline parameters to the downstream job.
In addition to Jesse Glick's answer, if you want to pass a string parameter then use: build job: 'your-job-name', parameters: [ string(name: 'passed_build_number_param', value: String.valueOf(BUILD_NUMBER)), string(name: 'complex_param', value: 'prefix-' + String.valueOf(BUILD_NUMBER)) ]
Jenkins
37,025,175
70
I chose to use the "Jenkins's own user database" security realm for user login as I couldn't use LDAP in my company. And Google's OpenID has issues when you decide to change the hostname or port number to something else. I use the "Project-based Matrix Authorization Strategy" scheme for my security. But I don't seem to be able to create my own group and add users to the group to manage the permissions.
According to this posting by the lead Jenkins developer, Kohsuke Kawaguchi, in 2009, there is no group support for the built-in Jenkins user database. Group support is only usable when integrating Jenkins with LDAP or Active Directory. This appears to be the same in 2012. However, as Vadim wrote in his answer, you don't need group support for the built-in Jenkins user database, thanks to the Role strategy plug-in.
Jenkins
11,855,944
70
Is there a way to get the job name for the current build in Jenkins and pass it as a parameter to an Ant build script?
Jenkins sets some environment variables such as JOB_NAME (see here) for more details on the variables set. You can then access these in ant via ${env.JOB_NAME}. Edit: There's also a little howto for environment variables on the same page here.
Jenkins
8,309,383
70
I want to configure Jenkins so that it starts building if a new tag is released in any branch of a Git repository. How do I configure this behaviour? Triggering:
Set refspec to: +refs/tags/*:refs/remotes/origin/tags/* branch specifier: ** Under build triggers check Build when a change is pushed to GitHub
Jenkins
29,742,847
69
I'm new to Jenkins and Git too. I created a remote repository at github.com and made a local copy of it. Now I want to link it through Jenkins. I installed the needed plugins for Git integration, but I don't know what my local Repository URL is to set when configuring the new project. Could someone help me find it?
In this case, the URL should start with the file protocol followed by the path to the repository. E.g., file:///home/rbkcbeqc/dev/git/gitsandbox.
Jenkins
10,498,554
69
I'm trying to replace our current build pipeline, currently hacked together using old-school Jenkins jobs, with a new job that uses the Jenkins pipeline plugin, and loads a Jenkinsfile from the project repository. One thing that the legacy job did was set the build description to include the Mercurial hash, username and current version using the Description setter plugin, so that builds are easy to find. Is there a way to replicate/emulate this behaviour with the Jenkins pipeline plugin?
Just figured it out. The pipeline job exposes a currentBuild global variable with writable properties. Setting the description can be done with: currentBuild.description = "my new description" anywhere in the pipeline script. More information in this DZone tutorial.
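To reproduce the old behaviour (Mercurial hash and version in the description), the value can be assembled from whatever the build already knows. A rough sketch, assuming hg is available on the agent; the user name and version number can be appended the same way from wherever your job obtains them:

node {
    checkout scm
    // 'hg id -i' prints the short changeset hash of the working copy
    def hgRev = sh(returnStdout: true, script: 'hg id -i').trim()
    currentBuild.description = "hg ${hgRev} | build #${env.BUILD_NUMBER}"
}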
Jenkins
36,501,203
68
I have a jenkins job that clones the repository from github, then runs the powershell script that increments the version number in the file. I'm now trying to publish that update file back to the original repository on github, so when developer pulls the changes he gets the latest version number. I tried using Git Publisher in the post build events, and I can publish tags with no issues, but it doesn't seem to publish any files.
The git checkout master of the answer by Woland isn't needed. Instead use the "Checkout to specific local branch" in the "Additional Behaviors" section to set the "Branch name" to master. The git commit -am "blah" is still needed. Now you can use the "Git Publisher" under "Post-build Actions" to push the changes. Be sure to specify the "Branches" to push ("Branch to push" = master, "Target remote name" = origin). "Merge Results" isn't needed.
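If the job is (or later becomes) a Pipeline job, where the Git Publisher post-build action is not available, a commonly used alternative is to push explicitly with credentials from the Jenkins credentials store. A hedged sketch, where the credentials id, remote URL and branch are placeholders:

withCredentials([usernamePassword(credentialsId: 'github-creds',
                                  usernameVariable: 'GIT_USER',
                                  passwordVariable: 'GIT_PASS')]) {
    // Single-quoted so the shell (not Groovy) expands the credential variables
    sh '''
        git commit -am "Bump version number"
        git push https://${GIT_USER}:${GIT_PASS}@github.com/your-org/your-repo.git HEAD:master
    '''
}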
Jenkins
19,922,435
68
I am struggling with an error in a multi-module project; the structure is simple, it looks like this: root module a module b module c pom.xml After using the Maven command line: clean sonar:sonar deploy I have this error: Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.3.0.603:sonar (default-cli) on project X : Please provide compiled classes of your project with sonar.java.binaries property -> [Help 1] EDIT: Here is the structure of my pom.xml <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <groupId>groupeId</groupId> <artifactId>artifactId</artifactId> <version>version</version> <packaging>pom</packaging> <name>${project.artifactId}-parent</name> <description>description</description> <build> <plugins> <plugin> <groupId>org.sonarsource.scanner.maven</groupId> <artifactId>sonar-maven-plugin</artifactId> <version>3.3.0.603</version> </plugin> </plugins> </build> <modules> <module>module a</module> <module>module b</module> <module>module c</module> </modules> </project>
You're running your Maven steps in the wrong order: clean - delete all previous build output sonar:sonar - run analysis (which requires build output) deploy - build &etc... Try this instead: mvn clean deploy sonar:sonar Now if you're about to object that you don't want to actually "deploy" the jar until/unless the changed code passes the Quality Gate, well... that requires a different workflow: mvn clean package sonar:sonar // check quality gate status // if (qualityGateOk) { deploy } The particulars of those last two steps will depend on your CI infrastructure. But for Jenkins, step #2 is well documented
Jenkins
46,976,567
67
I'm new to Jenkins pipeline; I'm defining a declarative syntax pipeline and I don't know if I can solve my problem, because I didn't find a solution. In this example, I need to pass a variable to ansible plugin (in old version I use an ENV_VAR or injecting it from file with inject plugin) that variable comes from a script. This is my perfect scenario (but it doesn't work because environment{}): pipeline { agent { node { label 'jenkins-node'}} stages { stage('Deploy') { environment { ANSIBLE_CONFIG = '${WORKSPACE}/chimera-ci/ansible/ansible.cfg' VERSION = sh("python3.5 docker/get_version.py") } steps { ansiblePlaybook credentialsId: 'example-credential', extras: '-e version=${VERSION}', inventory: 'development', playbook: 'deploy.yml' } } } } I tried other ways to test how env vars work in other post, example: pipeline { agent { node { label 'jenkins-node'}} stages { stage('PREPARE VARS') { steps { script { env['VERSION'] = sh(script: "python3.5 get_version.py") } echo env.VERSION } } } } but "echo env.VERSION" return null. Also tried the same example with: - VERSION=python3.5 get_version.py - VERSION=python3.5 get_version.py > props.file (and try to inject it, but didnt found how) If this is not possible I will do it in the ansible role. UPDATE There is another "issue" in Ansible Plugin, to use vars in extra vars it must have double quotes instead of single. ansiblePlaybook credentialsId: 'example-credential', extras: "-e version=${VERSION}", inventory: 'development', playbook: 'deploy.yml'
You can create variables before the pipeline block starts. You can have sh return stdout to assign to these variables. You don't have the same flexibility to assign to environment variables in the environment stanza. So substitute in python3.5 get_version.py where I have echo 0.0.1 in the script here (and make sure your python script just returns the version to stdout): def awesomeVersion = 'UNKNOWN' pipeline { agent { label 'docker' } stages { stage('build') { steps { script { awesomeVersion = sh(returnStdout: true, script: 'echo 0.0.1').trim() } } } stage('output_version') { steps { echo "awesomeVersion: ${awesomeVersion}" } } } } The output of the above pipeline is: awesomeVersion: 0.0.1
Jenkins
43,879,733
67
Using the Pipeline plugin in Jenkins 2.x, how can I access a Groovy variable that is defined somewhere at stage- or node-level from within a sh step? Simple example: node { stage('Test Stage') { some_var = 'Hello World' // this is Groovy echo some_var // printing via Groovy works sh 'echo $some_var' // printing in shell does not work } } gives the following on the Jenkins output page: [Pipeline] { [Pipeline] stage [Pipeline] { (Test Stage) [Pipeline] echo Hello World [Pipeline] sh [test] Running shell script + echo [Pipeline] } [Pipeline] // stage [Pipeline] } [Pipeline] // node [Pipeline] End of Pipeline Finished: SUCCESS As one can see, echo in the sh step prints an empty string. A work-around would be to define the variable in the environment scope via env.some_var = 'Hello World' and print it via sh 'echo ${env.some_var}' However, this kind of abuses the environmental scope for this task.
To use a templatable string, where variables are substituted into a string, use double quotes. sh "echo $some_var"
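An alternative that avoids Groovy string interpolation altogether (handy when the value could contain characters you don't want Groovy to substitute into the shell command) is to hand the variable to the step through the environment, for example:

node {
    stage('Test Stage') {
        def some_var = 'Hello World'
        // Expose the Groovy variable as an environment variable for this block only
        withEnv(["SOME_VAR=${some_var}"]) {
            sh 'echo $SOME_VAR'   // single quotes: the shell expands it, not Groovy
        }
    }
}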
Jenkins
39,982,414
67